<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
					xmlns:content="http://purl.org/rss/1.0/modules/content/"
					xmlns:wfw="http://wellformedweb.org/CommentAPI/"
				  >
<channel>
<title>Navicat Blog</title>
<link>https://www.navicat.com/company/aboutus/blog</link>
<language>en-us</language>
<pubDate>Mon, 06 Apr 2026 11:23:02 +0000</pubDate>
<item>
<title>The Hidden Costs of Cloud Database Services (and When On-Prem Makes More Financial Sense)</title>
<link>https://www.navicat.com/company/aboutus/blog/3816-the-hidden-costs-of-cloud-database-services-and-when-on-prem-makes-more-financial-sense.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en">  <head>    <title>The Hidden Costs of Cloud Database Services (and When On-Prem Makes More Financial Sense)</title>  </head>  <body>    <b>Mar 27, 2026</b> by Robert Gravelle<br/><br/>    <p>Cloud database services are easy to love at the start. You sign up, provision a database instance in minutes, and pay only for what you use. There's no hardware to buy, no data center to maintain, and no upfront capital commitment. For early-stage projects and small teams, this model is genuinely hard to beat. But as workloads mature and data volumes grow, the financial picture often becomes more complicated - and more expensive - than the initial simplicity suggested.</p>        <h1 class="blog-sub-title">The Sticker Price Is Just the Beginning</h1>    <p>Cloud providers price their database services in ways that make small-scale usage look attractively cheap but cause costs to compound quickly as usage scales. The base instance cost is only the starting point. Storage is billed separately, and in most managed database services, storage pricing is meaningfully higher than raw object storage costs. That's because you're paying for managed, redundant, high-performance disk, not just bytes on a drive. Backup storage is often billed on top of that, and retaining backups for compliance purposes can add up to a surprisingly large monthly line item.</p>    <p>Compute costs follow a similar pattern. The instance sizes that handle light development traffic become inadequate as production workloads grow, and stepping up to the next tier often means a significant jump in hourly cost. 
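</p>    <p>To make the compounding concrete, here is a rough back-of-the-envelope sketch in Python. Every rate in it is hypothetical, invented purely to show how the separately billed line items stack up:</p>

```python
# Hypothetical monthly bill for a managed cloud database instance.
# Every rate below is invented for the sake of the arithmetic -
# real prices vary by provider, region, and instance class.

HOURS_PER_MONTH = 730          # ~24 * 365 / 12

def monthly_cost(instance_hourly, storage_gb, storage_rate,
                 backup_gb, backup_rate, egress_gb, egress_rate):
    """Sum the separately billed line items into one monthly figure."""
    return (instance_hourly * HOURS_PER_MONTH   # compute, billed hourly
            + storage_gb * storage_rate         # managed, redundant disk
            + backup_gb * backup_rate           # retained backups
            + egress_gb * egress_rate)          # data moved out

total = monthly_cost(
    instance_hourly=0.35,                 # mid-tier instance
    storage_gb=500,  storage_rate=0.115,
    backup_gb=1500,  backup_rate=0.095,   # retention at 3x the data size
    egress_gb=800,   egress_rate=0.09,
)
print(f"${total:,.2f} per month")
```

<p>Under these made-up rates, the instance's hourly price accounts for less than half of the actual bill - exactly the pattern described above, where storage, backups, and data movement quietly dominate.</p>    <p>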
Reserved instance pricing can reduce this, but it requires committing to one or three years of usage upfront, which reintroduces a form of capital commitment that cloud was supposed to eliminate.</p>    <h1 class="blog-sub-title">Egress Fees: The Cost Nobody Talks About Enough</h1>    <p>One of the most underappreciated costs in cloud database operations is data egress, i.e., what you pay to move data out of the cloud provider's network. Ingress (data coming in) is typically free. Egress (data going out) is not, and the rates can be substantial when you're regularly transferring large result sets to analytics platforms, downstream applications, or on-premise systems. Organizations that run hybrid architectures - with some systems in the cloud and others on-prem - often discover that inter-environment data movement is quietly one of their larger cloud expenses.</p>    <p>This is worth thinking about carefully during architecture planning, because the impact isn't always obvious until you're already paying for it. A reporting pipeline that runs daily queries and exports results to an on-prem data warehouse might look cheap in compute terms but become expensive once egress is factored in.</p>    <h1 class="blog-sub-title">Operational Costs Don't Disappear (They Transform)</h1>    <p>A common argument for cloud database services is that they eliminate operational overhead: no DBAs patching servers, no hardware failures to diagnose, no capacity planning to worry about. This is partially true, but it replaces one set of operational concerns with another. Someone still needs to manage database configurations, monitor performance, tune queries, manage credentials and access controls, and respond to incidents. What changes is the nature of the work, not the need for skilled people to do it.</p>    <p>Tooling costs also tend to accumulate alongside cloud database spending. 
Monitoring, observability, backup management, and security scanning are all areas where organizations commonly bolt on third-party services - each with its own subscription fee - to fill gaps in what the cloud provider offers natively.</p>    <h1 class="blog-sub-title">When On-Prem Makes More Financial Sense</h1>    <p>The economics of on-premise infrastructure tend to favor organizations that have steady, predictable workloads rather than spiky or seasonal demand. If you're running database servers at consistently high utilization, say, above 60 to 70 percent, the cost per unit of compute on owned hardware is typically lower than the equivalent cloud instance cost over a three-to-five year hardware lifecycle. The crossover point varies by organization, but it's often reached earlier than people expect.</p>    <p>Organizations that have already invested in a data center, network infrastructure, and an in-house IT team to manage it are in a particularly strong position to benefit from on-prem database hosting. The marginal cost of adding database capacity to existing infrastructure is much lower than it would be for an organization starting from scratch. For these teams, the cloud's selling point of "no infrastructure to manage" is less compelling, because the infrastructure already exists and the people to run it are already on staff.</p>    <p>Data volume is another factor. Very large databases (multi-terabyte or petabyte-scale) can generate storage and egress costs in the cloud that dwarf the cost of equivalent on-prem storage hardware. At sufficient scale, buying and managing your own storage is simply cheaper, even accounting for the overhead of doing so.</p>    <h1 class="blog-sub-title">Reducing Complexity and Regaining Cost Control with Navicat On-Prem Server 3.1</h1>    <p>One of the less obvious contributors to rising database costs in cloud environments is the fragmentation of tooling and access management. 
As teams grow, it's common to layer multiple services for user management, collaboration, monitoring, and query workflows, each adding incremental cost and operational complexity. This is where solutions like <a class="default-links" href="https://www.navicat.com/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server 3.1</a> fit naturally into an on-premise or hybrid strategy.</p>    <p>By centralizing database access, user permissions, and collaborative workflows within your own infrastructure, Navicat On-Prem Server 3.1 helps reduce reliance on multiple cloud-based tools and subscriptions. Teams can manage queries, share connections, and control access from a single platform without incurring ongoing per-user or usage-based cloud fees. This aligns particularly well with organizations already operating on-prem systems, where predictability and cost containment are key priorities.</p>    <p>There is also a data locality advantage. Keeping database management and access layers within the same environment as the data itself minimizes unnecessary data movement, which in turn helps avoid the egress charges that often accumulate in cloud-heavy architectures. Over time, these incremental savings can be meaningful, especially for data-intensive workloads.</p>    <p>In this sense, tools like Navicat On-Prem Server 3.1 are not just operational conveniences; they are part of a broader strategy to simplify architecture, consolidate tooling, and bring database-related costs back under direct organizational control.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Neither hosting model is universally cheaper. The right answer depends on your workload characteristics, your existing infrastructure, your team's capabilities, and your organization's financial preferences around capital versus operating expenditure. 
The important thing is to make that comparison honestly, with all the costs on the table, rather than letting the initial simplicity of cloud pricing obscure what you'll actually be paying once your system is running at scale.</p>  </body></html>]]></description>
</item>
<item>
<title>How AI Code Completion Is Changing the Way DBAs Write SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/3786-how-ai-code-completion-is-changing-the-way-dbas-write-sql.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en">  <head>    <title>How AI Code Completion Is Changing the Way DBAs Write SQL</title>  </head>  <body>    <b>Mar 20, 2026</b> by Robert Gravelle<br/><br/>    <p>For most of its history, writing SQL has been a largely manual craft. A database administrator or developer would pull up a query editor, recall the relevant table names and column definitions from memory or (more likely!) a schema diagram, and construct statements piece by piece. Syntax errors were caught at execution time. Optimization was a separate, deliberate step. Now, AI-powered code completion is beginning to change that workflow in meaningful ways - not by replacing the human (at least, not yet!), but by compressing the distance between intent and working query.</p>        <h1 class="blog-sub-title">What AI Code Completion Actually Does</h1>    <p>Traditional code completion, i.e., the kind that has been in database IDEs for years, works by pattern-matching against known SQL syntax and object names in the connected schema. It can suggest a table name after you type FROM, or complete a column name once it recognizes the context. Useful, but fundamentally mechanical.</p>    <figure>      <figcaption>Auto-completion in Navicat 17</figcaption>      <img alt="code_completion (31K)" src="https://www.navicat.com/link/Blog/Image/2026/20260320/code_completion.jpg" width="515" />    </figure>    <p>AI-powered completion goes further. Rather than just predicting the next token based on syntax rules, it understands intent. You can describe what you want in plain language, for example, "find all customers who placed more than three orders in the last 90 days", and the AI can generate a complete, structurally sound SQL statement. It can also suggest how to rewrite a subquery as a JOIN, flag a missing index condition, or explain why a particular query might perform poorly at scale. 
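</p>    <p>As a concrete illustration, the following runnable sketch (using Python's built-in sqlite3 module and a hypothetical two-table schema) shows the kind of statement an assistant might produce for that prompt:</p>

```python
import sqlite3

# The kind of statement an AI assistant might generate from the prompt
# "find all customers who placed more than three orders in the last
# 90 days". The schema and names here are hypothetical.
QUERY = """
SELECT c.customer_id, c.name
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.customer_id
WHERE o.order_date >= date('now', '-90 days')
GROUP BY c.customer_id, c.name
HAVING COUNT(*) > 3
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers,
                         order_date TEXT);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    -- Acme places four recent orders; Globex two recent plus one old one.
    INSERT INTO orders (customer_id, order_date) VALUES
        (1, date('now', '-5 days')),  (1, date('now', '-15 days')),
        (1, date('now', '-40 days')), (1, date('now', '-80 days')),
        (2, date('now', '-10 days')), (2, date('now', '-20 days')),
        (2, date('now', '-200 days'));
""")
print(conn.execute(QUERY).fetchall())   # only Acme qualifies
```

<p>The generated SQL is structurally sound for this schema - but note how much depends on the assistant knowing that "placed an order" maps to the orders table, which is exactly the kind of context a human still has to verify.</p>    <p>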
The difference is less about autocomplete and more about having a knowledgeable collaborator available at the point of writing.</p>    <h1 class="blog-sub-title">The Practical Impact on DBA Workflows</h1>    <p>The most immediate benefit of AI-powered completion is speed. Routine queries such as aggregations, filtered selects, and common JOIN patterns that would take a few minutes to write carefully can often be scaffolded in seconds, leaving the DBA to focus on reviewing and refining rather than constructing from scratch. For less experienced team members, this is particularly valuable: AI suggestions provide a working starting point and implicitly model good query structure, which accelerates learning in a way that blank-editor writing does not.</p>    <p>There are also gains in consistency. When multiple developers are working across the same schema, AI tools can help enforce consistent patterns for things like date filtering, NULL handling, and aggregation logic, hence reducing the subtle variability that tends to creep into large SQL codebases over time.</p>    <p>That said, AI-generated SQL still requires careful human review. The output is only as good as the context provided, and models can confidently produce queries that are syntactically valid but semantically wrong - joining on the wrong key, filtering on the wrong column, or missing a critical business rule that the AI had no way of knowing. 
The DBA's judgment remains indispensable; AI assistance changes where that judgment is applied, not whether it's needed.</p>    <h1 class="blog-sub-title">AI Features in Navicat On-Prem Server 3.1</h1>    <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server 3.1</a>, released in February 2026, brought AI Assistant and Ask AI into the on-premise collaboration platform for the first time - making these capabilities available to teams who manage their database infrastructure entirely within their own network.</p>    <p>The AI Assistant provides a conversational interface directly within the platform where users can ask questions and receive immediate answers. This is particularly useful for query writing and explanation tasks: a team member can describe what they're trying to retrieve, ask the assistant to explain an unfamiliar query written by a colleague, or get guidance on SQL syntax without leaving the tool they're already working in.</p>    <figure>      <figcaption>AI Assistant in Navicat On-Prem Server 3.1</figcaption>      <img alt="ai_assistant_new_chat (30K)" src="https://www.navicat.com/link/Blog/Image/2026/20260320/ai_assistant_new_chat.jpg" width="340" />    </figure>      <p>Ask AI is oriented more toward specific, action-driven tasks in the query editor. Users can invoke it to explain, optimize, format, or convert SQL queries, covering some of the most common tasks that slow down query development. 
Frequently used actions can be pinned for quick access, which makes the feature practical for day-to-day use rather than something you have to dig for when you need it.</p>    <figure>      <figcaption>Ask AI in Navicat On-Prem Server 3.1</figcaption>      <img alt="ask_ai_suggest_code (32K)" src="https://www.navicat.com/link/Blog/Image/2026/20260320/ask_ai_suggest_code.jpg" width="535" />    </figure>    <h1 class="blog-sub-title">Conclusion</h1>    <p>AI code completion isn't replacing the DBA; it's changing the shape of the job. The cognitive load shifts away from syntax recall and boilerplate construction toward higher-order tasks: validating AI output, making architectural decisions, and applying the business context that no model can infer on its own. For teams willing to adapt their workflows thoughtfully, these tools represent a genuine productivity gain. The challenge, as with most AI tooling, is learning where to trust the output and where to intervene - and that judgment, for now, remains entirely human.</p>  </body></html>]]></description>
</item>
<item>
<title>Role-Based Access Control in Database Environments: Getting It Right</title>
<link>https://www.navicat.com/company/aboutus/blog/3579-role-based-access-control-in-database-environments-getting-it-right.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en">  <head>    <title>Role-Based Access Control in Database Environments: Getting It Right</title>  </head>  <body><b>Mar 13, 2026</b> by Robert Gravelle<br/><br/>    <p>Every database holds data that some people only need to view, some need to modify, and others should never touch at all. Role-Based Access Control - commonly referred to as RBAC - is the framework that makes that distinction enforceable. When it's implemented well, it reduces security risk, simplifies auditing, and makes it far easier to manage access as teams grow and change. When it's implemented poorly, it tends to collapse into either over-permissioning (everyone can do everything) or under-permissioning (nobody can do what they need to). Getting it right requires more than just knowing the theory.</p>        <h1 class="blog-sub-title">What RBAC Actually Means in a Database Context</h1>    <p>At its core, RBAC is the practice of assigning permissions to roles rather than directly to individual users. A user is then granted access by being assigned to one or more roles. This indirection is what makes the system scalable: when a job function changes, you update the role once rather than hunting down every individual account that performs that function.</p>    <p>In a database environment, roles typically map to actions like reading data, writing or modifying data, managing schema objects (creating or dropping tables, indexes, and so on), and administering users and permissions themselves. 
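</p>    <p>The indirection is simple enough to sketch in a few lines of Python (the role and permission names here are purely illustrative):</p>

```python
# Minimal sketch of RBAC indirection: permissions attach to roles,
# users attach to roles, and an access check walks
# user -> roles -> permissions. All names are illustrative.

ROLE_PERMISSIONS = {
    "analyst":   {"SELECT"},
    "developer": {"SELECT", "INSERT", "UPDATE", "DELETE"},
    "dba":       {"SELECT", "INSERT", "UPDATE", "DELETE",
                  "CREATE", "DROP", "GRANT"},
}

USER_ROLES = {
    "alice": {"analyst"},
    "bob":   {"developer", "analyst"},   # multiple roles are allowed
}

def can(user, action):
    """True if any role assigned to the user grants the action."""
    return any(action in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

assert can("alice", "SELECT") and not can("alice", "DROP")

# When a job function changes, the role is updated once - every
# analyst picks up the new permission without touching user records:
ROLE_PERMISSIONS["analyst"].add("EXECUTE")
assert can("alice", "EXECUTE") and can("bob", "EXECUTE")
```

<p>Compare that with direct user-to-permission grants, where the same change would mean hunting down every individual analyst account.</p>    <p>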
Most production database systems, like MySQL, PostgreSQL, SQL Server, and Oracle, have native support for role-based privilege management, though the implementation details vary considerably between them.</p>    <h1 class="blog-sub-title">The Principle of Least Privilege</h1>    <p>The single most important design principle behind any sound RBAC implementation is least privilege: every user and every role should have the minimum level of access necessary to perform their intended function, and nothing more. This sounds straightforward but is frequently violated in practice, often for convenience. A developer who needs read access to a production database to debug an issue gets granted full read-write access because it's faster to set up. A contractor who needs access to one schema gets access to the entire server. Over time, these shortcuts accumulate into a permissions structure that nobody fully understands.</p>    <p>Least privilege also applies horizontally, not just vertically. A role that needs access to one database shouldn't have it granted at the server level. A role that needs to read from three tables shouldn't have SELECT privileges on the entire schema. Precision matters, both for security and for auditability.</p>    <h1 class="blog-sub-title">Designing Roles Before Assigning Them</h1>    <p>A common mistake is to treat RBAC as something you configure reactively - adding permissions when someone asks for access, removing them when something goes wrong. The more reliable approach is to design your role taxonomy upfront, based on the actual job functions that interact with your databases.</p>    <p>Start by identifying the distinct categories of users: read-only analysts, application service accounts, developers, DBAs, security auditors, and so on. For each category, define exactly what operations they need to perform and on which objects. Then model your roles to match those categories, keeping roles focused and non-overlapping where possible. 
A user who performs multiple functions can be assigned multiple roles, but each role should be coherent on its own.</p>    <p>It's also worth distinguishing between roles that exist at the database engine level (the privileges assigned within MySQL, PostgreSQL, and so on) and roles that exist at the tooling or collaboration layer, where teams manage shared objects like queries, connection configs, and data models. Both layers need governance.</p>    <h1 class="blog-sub-title">Managing Access in Navicat On-Prem Server</h1>    <p>For teams using <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> as their database collaboration platform, access control is managed at the project level through a straightforward three-tier role system. When adding a member to a project, administrators assign one of three access rights that determine what that member can do within the project:</p>    <p><strong>Can Manage and Edit</strong> is the highest level of access. Members with this right can read and interact with all objects in the project, create and modify objects, manage project membership (adding or removing other members and adjusting their roles), and rename the project itself. This right is appropriate for project leads, senior DBAs, or anyone who needs administrative control over the collaboration workspace.</p>    <p><strong>Can Edit</strong> grants full read and write access to project objects. Members can view and modify shared content, but this right stops short of membership management and project renaming. This is well-suited to active contributors who need to create and update queries, connection settings, or other shared resources, but who shouldn't have authority over the project's structure or membership.</p>    <p><strong>Can View</strong> is a read-only role. Members can access and view objects within the project but cannot make changes to any of them. 
This is the appropriate choice for stakeholders, auditors, or team members who need visibility into shared resources without the ability to alter them.</p>    <p>This model maps cleanly onto the principle of least privilege: access is scoped specifically to collaboration objects within the platform, and the three tiers cover the most common real-world access patterns without creating unnecessary complexity. It also complements, rather than replaces, the underlying database-level permissions managed within individual database engines; the two layers of access control work together.</p>    <h1 class="blog-sub-title">Keeping Access Control Maintainable Over Time</h1>    <p>RBAC implementations tend to drift. People change roles, projects end, contractors leave, and permissions that were set up temporarily become permanent through neglect. Building in a regular review cadence (quarterly is common) helps keep your permission structure clean. Automated tooling that reports on unused roles, dormant accounts, or privilege escalations can surface problems before they become incidents.</p>    <p>Documentation matters too. When roles are well-documented with clear statements of purpose, who should hold them, and what they grant access to, it becomes much easier for new administrators to maintain the system correctly and for auditors to verify it. An RBAC setup that only one person fully understands is a fragile one.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Role-based access control isn't a configuration you set once and forget. It's an ongoing practice that reflects your organization's structure, security posture, and operational needs. 
The core principles - least privilege, role-based rather than user-based assignment, and regular review - apply whether you're managing privileges in a database engine directly or governing access to a shared collaboration platform like <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>.</p>  </body></html>]]></description>
</item>
<item>
<title>On-Prem vs. Cloud Database Hosting: How to Choose the Right Approach for Your Organization</title>
<link>https://www.navicat.com/company/aboutus/blog/3577-on-prem-vs-cloud-database-hosting-how-to-choose-the-right-approach-for-your-organization.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en">  <head>    <title>On-Prem vs. Cloud Database Hosting: How to Choose the Right Approach for Your Organization</title>  </head>  <body>   <b>Mar 10, 2026</b> by Robert Gravelle<br/><br/>    <p>When it comes to hosting your databases and the tools that manage them, the choice between on-premise and cloud-based infrastructure is rarely as simple as it looks. Both models have matured considerably over the past decade, and the right answer almost always depends on the specific circumstances of your organization, as opposed to any universal rule of thumb.</p>         <h1 class="blog-sub-title">What "On-Prem" and "Cloud" Actually Mean</h1>    <p>On-premise (on-prem) database hosting means your databases and management infrastructure run on servers you own and physically control, typically within your own data center or office network. The cloud, by contrast, means delegating that infrastructure to a third-party provider - AWS, Azure, Google Cloud, and so on - who hosts and maintains the underlying hardware on your behalf.</p>    <p>A third option, the hybrid model, sits between them: some data and workloads remain on-prem while others move to the cloud. This is increasingly common in large enterprises that have legacy systems they can't easily migrate.</p>     <h1 class="blog-sub-title">The Case for Cloud Hosting</h1>    <p>Cloud database hosting has surged in popularity for good reasons. It eliminates the capital expenditure of buying servers, reduces the operational burden of managing hardware, and makes it trivially easy to scale up or down based on demand. For startups, small teams, or projects with variable workloads, the cloud's pay-as-you-go model is genuinely compelling.</p>    <p>The cloud is also attractive for distributed teams. 
If your engineers, DBAs, and analysts are spread across different cities or time zones, having your database infrastructure in the cloud makes collaboration simpler and doesn't require VPN tunneling or complex firewall rules to give everyone access.</p>     <h1 class="blog-sub-title">The Case for On-Prem Hosting</h1>    <p>The cloud isn't the right fit for everyone. Organizations in regulated industries such as healthcare, finance, government, and legal often operate under compliance requirements (HIPAA, GDPR, PCI-DSS, SOX) that impose strict controls over where data physically resides and who can access it. For these organizations, keeping databases on-prem isn't a preference; it's often a legal or contractual obligation.</p>    <p>Beyond compliance, on-prem hosting gives you complete control over your security posture, network configuration, and upgrade schedules. You're not subject to a provider's maintenance windows, pricing changes, or service outages. For organizations with steady, predictable workloads and an in-house IT team to manage infrastructure, on-prem can also be significantly more cost-effective in the long run than paying ongoing cloud subscription fees.</p>     <h1 class="blog-sub-title">Tools That Bridge the Gap</h1>    <p>One of the more interesting developments in the database tooling space is the emergence of products designed specifically to give teams the collaboration benefits of the cloud while keeping data entirely on-premise. <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> is a good example of this philosophy in practice.</p>    <p>Rather than forcing a choice between collaboration and data sovereignty, it lets organizations host their own private server that all Navicat desktop clients can connect to for real-time team collaboration. 
Team members can share connection settings, queries, code snippets, data models, and BI workspaces through a centralized hub they control entirely, without any data ever leaving their own network.</p>    <p>The most recent release, version 3.1 (February 2026), adds AI Assistant integration to the platform, including "Ask AI" features directly within the server environment. This brings AI-assisted query writing and code generation into an on-prem context - an important step for organizations that want the productivity benefits that AI tooling offers. The platform has supported MySQL, MariaDB, and PostgreSQL (including Fujitsu Enterprise Postgres) since version 3.0, along with an enhanced query editor featuring code completion, folding, and SQL beautification.</p>     <h1 class="blog-sub-title">Key Questions to Guide Your Decision</h1>    <p>If you're evaluating which model suits your organization, a few questions tend to cut through the noise quickly. Does your industry have data residency requirements that restrict where your data can be stored? If yes, on-prem or a private cloud is likely non-negotiable. Do you have the internal IT staff to manage and maintain database servers? If no, managed cloud services may reduce operational risk. Is your workload predictable or highly variable? Unpredictable, spiky workloads generally favor cloud elasticity, while steady workloads favor on-prem economics. And finally, how important is team collaboration across distributed locations? If real-time sharing is critical, make sure any on-prem solution you choose - like <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> - is built to support it natively.</p>    <p>Ultimately, the right answer isn't about which model is objectively better. 
It's about which one aligns with your compliance obligations, your team's capabilities, and your organization's tolerance for infrastructure risk and cost.</p>  </body></html>]]></description>
</item>
<item>
<title>Getting Started with AI Assistants in Navicat On-Prem Server 3.1</title>
<link>https://www.navicat.com/company/aboutus/blog/3575-getting-started-with-ai-assistants-in-navicat-on-prem-server-3-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Getting Started with AI Assistants in Navicat On-Prem Server 3.1</title></head><body><b>Mar 6, 2026</b> by Robert Gravelle<br/><br/><p>Navicat's latest <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">On-Prem Server</a> (3.1) brings AI assistance to database management in a big way. In fact, two of its three new features are AI-powered: there's a general-purpose AI Assistant as well as a more specialized Ask AI tool aimed at SQL development. Both rely on the APIs of popular AI models. In today's blog article, we'll learn how easy it is to get started with AI Assistants so that your team can benefit from the power of AI guidance.</p><h1 class="blog-sub-title">A Brief Introduction to AI APIs</h1><p>AI APIs (Application Programming Interfaces) are services that allow developers to access AI capabilities over the internet without having to build or host the underlying AI models themselves. Instead of training your own large language model - which requires enormous computing resources and expertise - you simply send a request to an AI API and receive an intelligent response back in seconds.</p><p>The use cases for AI APIs are remarkably broad, which is part of why they have become so popular. They power natural language processing tasks like answering questions, summarizing documents, translating languages, and analyzing sentiment. In software development, they enable code assistance features that can generate, explain, debug, and optimize code - exactly like Navicat's Ask AI tool does for SQL. Beyond development, AI APIs are widely used for content generation, data extraction, chatbots, virtual assistants, and image recognition.</p><p>This is why configuring the AI Assistant in Navicat involves selecting and connecting to an AI API provider, essentially giving the software its "brain." 
So let's move on to that now!</p><h1 class="blog-sub-title">Adding an AI Assistant</h1><p>All of the details pertaining to AI Assistants are located on the Database Management Settings screen. It's available by clicking the drop-down that is associated with your Profile Name:</p><img alt="database_management_settings_menu_item (21K)" src="https://www.navicat.com/link/Blog/Image/2026/20260306/database_management_settings_menu_item.jpg" height="382"/><p>On the Database Management Settings screen, click the AI button at the top of the dialog to display all of the AI Assistant details. At first it will be mostly empty because we have not yet selected an AI model. To do that, click the plus (+) icon at the bottom of the AI Assistant list:</p><img alt="add_ai_assistant_button (49K)" src="https://www.navicat.com/link/Blog/Image/2026/20260306/add_ai_assistant_button.jpg" height="679" /><p>Doing so will open a menu containing all of the available AI model providers. All of the most popular providers are supported, including Anthropic Claude, Google Gemini, DeepSeek, Grok, and more.</p><img alt="ai_assistant_list (12K)" src="https://www.navicat.com/link/Blog/Image/2026/20260306/ai_assistant_list.jpg" height="284"/><p>Once you've selected an AI model provider, a number of fields will appear to the right of the AI Assistant list. These will vary depending on the provider you choose. For example, Claude supports a max_tokens parameter which we see on the screen as "Max Tokens". 
The right number for Max Tokens depends on what you're using Claude for, but here are some guidelines:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>Simple confirmations/classifications: 256-512 tokens</li>  <li>Chat/Q&amp;A: 1024-2048 tokens is usually plenty</li>  <li>Code generation: 4096-8192 tokens for complex functions</li>  <li>Writing/content creation: 4096-8192 tokens</li></ul><p>Important things to know:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>max_tokens is the maximum Claude can generate - it will stop when it's finished, even if it hasn't reached the limit</li>  <li>1 token &asymp; 4 characters or roughly 0.75 words in English, so 1024 tokens &asymp; 750-800 words</li>  <li>The maximum varies by model - Claude models support up to 8192 output tokens (though the total context window is much larger)</li>  <li>You're only charged for tokens actually used, not the max you specify</li></ul><p>If you want to limit the length of responses, start with 4096 tokens - this gives Claude plenty of room for detailed responses without being unnecessarily high. If you find responses are getting cut off, increase it. If you know you only need brief answers, you can lower it to save a bit on costs. Remember that setting it higher doesn't cost you more unless Claude actually uses those tokens, so it's often safe to be generous with this parameter.</p><h3>Obtaining an API Key</h3><p>Regardless of which AI provider you choose, the API Key is the most crucial piece of information that you'll need to provide. You'll need to obtain an API key from your chosen AI provider. An API key is essentially a unique, auto-generated password that identifies your application and grants it access to the AI service. 
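</p>    <p>To make the mechanics concrete, here is a rough sketch of where the key and the Max Tokens value end up in a request. It uses the shape of Anthropic's Messages API as documented publicly; the model name and prompt are placeholders, and On-Prem Server performs the equivalent of this for you behind the scenes:</p>

```python
import json

API_KEY = "sk-ant-..."   # placeholder - a real key is a secret, never share it

# Headers and body in the shape of Anthropic's Messages API; other
# providers differ in detail but follow the same pattern.
headers = {
    "x-api-key": API_KEY,              # identifies you for auth and billing
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
payload = {
    "model": "claude-sonnet-4-5",      # example id - check the provider's list
    "max_tokens": 4096,                # the "Max Tokens" setting
    "messages": [
        {"role": "user", "content": "Explain: SELECT COUNT(*) FROM orders;"},
    ],
}

# This JSON body would be POSTed to https://api.anthropic.com/v1/messages;
# the provider authenticates the key, runs the model, and bills the tokens.
print(json.dumps(payload, indent=2))
```

<p>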
When Navicat On-Prem Server sends a request to an AI API, it includes this key as part of the request, allowing the provider to verify your identity, track your usage, and apply the appropriate billing charges.</p> <p>API keys are obtained by registering for an account on your chosen provider's platform - such as Anthropic's Console at <a class="default-links" href="https://console.anthropic.com/" target="_blank">console.anthropic.com</a> or OpenAI's platform at <a class="default-links" href="https://platform.openai.com/" target="_blank">platform.openai.com</a> - and are typically generated instantly on demand. Since an API key provides direct access to a paid service, it should be treated like a password: stored securely, never shared publicly, and never embedded in publicly accessible code. Most providers allow you to generate multiple keys for different applications or team members, and to revoke them instantly if they are ever compromised.</p><h3>Choosing a Model</h3><p>The most important decision you will need to make - next to the provider - is which model to use. Most AI providers offer several to choose from. Clicking the ellipsis (...) to the right of the Model textbox opens a dialog where you can select a model. Here are the choices for Claude: </p><img alt="select_model_dialog (51K)" src="https://www.navicat.com/link/Blog/Image/2026/20260306/select_model_dialog.jpg" height="743"/><p>It's worth noting that the cost of using different models can vary significantly. Consult the provider's model overview page for exact pricing information. For example, <a class="default-links" href="https://platform.claude.com/docs/en/about-claude/models/overview" target="_blank">here</a> is the overview page for Claude.</p><h3>Putting It All Together</h3><p>Once you've filled in all of the necessary information, you can test the API service by clicking the "Test Connection" button. If all goes well, you should see a "Connection Successful!" 
message at the top of the screen:</p><img alt="connection_test_success_message (58K)" src="https://www.navicat.com/link/Blog/Image/2026/20260306/connection_test_success_message.jpg" height="709" /><p>You're now ready to use both <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">On-Prem Server 3.1</a>'s AI Assistant and Ask AI tools!</p></body></html>]]></description>
</item>
<item>
<title>SQL vs. NoSQL: Choosing the Best Fit for Your Project</title>
<link>https://www.navicat.com/company/aboutus/blog/3573-sql-vs-nosql-choosing-the-best-fit-for-your-project.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>SQL vs. NoSQL: Choosing the Best Fit for Your Project</title></head><body>   <b>Mar 3, 2026</b> by Robert Gravelle<br/><br/>        <p>Choosing between SQL and NoSQL databases is one of the most critical architectural decisions you'll make in any project. While the industry hype cycle has swung wildly between championing relational databases and promoting NoSQL as the future, the reality is that each approach serves distinct purposes. Making the right choice requires understanding your specific requirements rather than following trends.</p>        <h1 class="blog-sub-title">Understanding the Core Differences</h1>        <p>SQL databases like MySQL, PostgreSQL, and SQL Server organize data into structured tables with predefined schemas and relationships. They excel at maintaining data integrity through ACID properties, making them ideal for applications where consistency is paramount. Meanwhile, NoSQL databases such as MongoDB and Redis take varied approaches, storing data as documents, key-value pairs, or graphs without rigid schemas. This flexibility allows them to scale horizontally and handle rapidly changing data structures.</p>        <h1 class="blog-sub-title">When SQL Makes Sense</h1>        <p>Traditional relational databases remain the optimal choice when your data has clear relationships and structure. Financial applications, e-commerce platforms with complex transactions, and systems requiring robust reporting capabilities benefit from SQL's powerful join operations and transaction guarantees. If your application needs strong consistency, complex queries across multiple tables, or regulatory compliance with strict data integrity requirements, SQL databases provide proven, reliable solutions.</p>        <h1 class="blog-sub-title">When NoSQL Shines</h1>        <p>NoSQL databases excel in scenarios requiring massive scale, high write throughput, or flexible data models. 
Real-time analytics platforms, content management systems with diverse data types, IoT applications processing millions of sensor readings, and mobile apps requiring offline sync capabilities often perform better with NoSQL. The ability to evolve your schema without migrations and to distribute data across multiple servers makes NoSQL particularly attractive for rapidly growing applications.</p>        <h1 class="blog-sub-title">The Hybrid Reality</h1>        <p>Many modern applications don't fit neatly into either category. You might use PostgreSQL for transactional data while employing Redis for caching and session management, or combine SQL Server with MongoDB for handling both structured customer records and unstructured product catalogs. This approach - known as "polyglot persistence" - leverages each database type's strengths.</p>        <h1 class="blog-sub-title">Managing Both SQL and NoSQL with Navicat</h1>        <p>Navicat eliminates the complexity of working across different database types. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> provides a unified interface for managing SQL databases - including MySQL, PostgreSQL, MariaDB, SQL Server, Oracle, SQLite, and Snowflake - alongside NoSQL systems like MongoDB and Redis, all within a single application. This means developers and database administrators can switch between relational and NoSQL databases without learning multiple management tools.</p>        <p>The platform's visual query builder works seamlessly across different database types, while features like data modeling, synchronization, and backup operate consistently whether you're working with SQL tables or NoSQL collections. Navicat's support for MongoDB includes schema visualization and aggregation pipeline builders, while its Redis integration provides intuitive interfaces for key-value operations. 
This unified approach proves invaluable when implementing hybrid architectures, allowing teams to design, develop, and maintain complex data ecosystems efficiently.</p>        <h1 class="blog-sub-title">Making Your Decision</h1>        <p>Choose based on your actual requirements, not industry buzz. Consider your data structure, consistency needs, scalability requirements, and team expertise. Remember that you're not locked into a single choice forever. Start with the database that best fits your current needs, and use tools like <a class="default-links"  href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> to manage complexity as your architecture evolves.</p></body></html>]]></description>
</item>
<item>
<title>What Metrics Actually Matter in Database Monitoring</title>
<link>https://www.navicat.com/company/aboutus/blog/3560-what-metrics-actually-matter-in-database-monitoring.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What Metrics Actually Matter in Database Monitoring</title></head><body><b>Feb 27, 2026</b> by Robert Gravelle<br/><br/><p>For years, many organizations have relied on simple uptime checks to gauge database health. While knowing your database is running is certainly important, uptime alone tells you almost nothing about performance, efficiency, or the user experience. A database can technically be "up" while delivering painfully slow queries, suffering from resource contention, or teetering on the edge of capacity exhaustion. Modern database monitoring requires a more sophisticated approach that focuses on metrics that actually impact your applications and users.</p><h1 class="blog-sub-title">Query Performance Metrics</h1><p>The most critical area to monitor is query performance, since queries are where your database directly interacts with your applications. Long-running queries are often the canary in the coal mine for deeper problems. By tracking query execution times, you can identify which specific queries are consuming excessive resources and causing bottlenecks. Equally important is understanding query wait times, which reveal what your queries are waiting for, whether that's disk access, locks, or network resources.</p><p>Beyond execution time, examining the top queries by CPU usage helps you identify which operations are most computationally expensive. Similarly, tracking queries by the number of reads and writes they perform can highlight inefficient data access patterns that might benefit from index optimization or query refactoring. These metrics transform abstract performance concerns into concrete, actionable insights.</p><h1 class="blog-sub-title">Resource Utilization and Capacity</h1><p>While CPU and memory usage might seem like basic metrics, understanding them in context is crucial. 
CPU utilization patterns tell you whether your database server has adequate processing power for your workload, but more importantly, sustained high CPU usage can indicate missing indexes or poorly optimized queries rather than simply insufficient hardware.</p><p>Memory metrics deserve particular attention because databases rely heavily on caching to achieve good performance. The buffer cache hit ratio, which measures the percentage of data requests served from memory rather than disk, should typically exceed 90 percent. When this ratio drops, it indicates that your database is frequently going to disk for data, dramatically slowing performance. Monitoring memory allocation over time also helps with capacity planning, showing you whether your database's memory footprint is growing at a sustainable rate.</p><p>Disk I/O metrics complete the resource picture. Tracking disk read and write operations per second, along with average disk response times, helps you understand whether storage is becoming a bottleneck. Network I/O is equally important for understanding how much data is flowing between your database and applications.</p><h1 class="blog-sub-title">Connection and Session Activity</h1><p>Monitoring active connections and session details provides visibility into how your applications are actually using the database. Tracking current user connections helps you understand your concurrent workload and can alert you to connection pool exhaustion before it causes application failures. Monitoring connection patterns over time also reveals usage trends that inform capacity planning decisions.</p><p>Lock monitoring is particularly critical for understanding contention issues. When queries are waiting for locks held by other sessions, users experience delays that simple CPU or memory metrics won't explain. 
By tracking both the locks currently held and sessions waiting for locks, you can identify problematic transaction patterns or long-running transactions that are blocking other work.</p><h1 class="blog-sub-title">Measuring These Metrics with Navicat Monitor</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> provides an agentless architecture for monitoring MySQL, MariaDB, PostgreSQL, and SQL Server databases, which means you don't need to install software on your database servers themselves. The tool collects metrics at regular intervals and stores them in a repository database for historical analysis and trending.</p><p>For query performance monitoring, Navicat Monitor's Long Running Queries chart visualizes top queries based on execution duration, wait types, CPU usage, and read/write operations. This allows you to quickly identify problematic queries and drill down into their execution characteristics. The tool maintains historical data so you can track whether query performance is degrading over time.</p><p>Resource monitoring in Navicat Monitor covers the full spectrum of system metrics. It collects CPU load, RAM usage, and various other system resources over SSH or SNMP, giving you visibility into both database-level and operating system-level performance. The interactive dashboard provides real-time and historical graphs showing server load, disk usage, network I/O, and table locks, making it easy to correlate different metrics and identify patterns.</p><p>One particularly powerful feature is the custom metrics capability. You can write your own queries to collect performance metrics for specific instances and receive alerts when values exceed defined thresholds. 
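</p><p>As a quick illustration of the kind of derived value a custom metric can track, the buffer cache hit ratio discussed earlier reduces to two counters. The sketch below uses MySQL's InnoDB counter names; the values are made up:</p>

```python
def buffer_pool_hit_ratio(read_requests, disk_reads):
    """Percentage of logical reads served from memory rather than disk.

    read_requests: total logical reads (e.g. Innodb_buffer_pool_read_requests)
    disk_reads:    reads that had to hit disk (e.g. Innodb_buffer_pool_reads)
    """
    if read_requests == 0:
        return 100.0  # no traffic yet; report a perfect ratio
    return 100.0 * (1 - disk_reads / read_requests)

# Hypothetical counters: 1,000,000 logical reads, 25,000 of them from disk
print(round(buffer_pool_hit_ratio(1_000_000, 25_000), 1))  # 97.5
```

<p>A value that falls below the 90 percent guideline mentioned earlier suggests the working set no longer fits in memory.</p><p>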
This means you can monitor business-specific indicators or specialized performance characteristics that matter to your particular applications, going well beyond the standard preset metrics.</p><img alt="New_Custom_Metrics_screen_details (55K)" src="https://www.navicat.com/link/Blog/Image/2026/20260227/New_Custom_Metrics_screen_details.jpg" height="728" width="714" /><p>The alerting system in Navicat Monitor enables proactive management by notifying you when metrics cross configurable thresholds. You can set alerts for any metric, including custom ones, and define both the threshold value and how long it must be exceeded before triggering an alert. Notifications can be delivered via email, SMS, SNMP, or Slack, ensuring your team knows about problems before they impact users. The tool provides detailed alert analysis that includes metric charts, timelines, and historical context to help with root cause analysis.</p><h1 class="blog-sub-title">Beyond the Dashboard: Making Metrics Actionable</h1><p>Collecting metrics is only the first step. The real value comes from understanding patterns, setting appropriate baselines, and creating actionable alerts. Rather than simply watching dashboards, establish normal ranges for your key metrics based on historical data and workload patterns. This allows you to set intelligent alert thresholds that catch genuine problems without generating false alarms from normal variations.</p><p>Consider the relationships between metrics when investigating issues. A spike in disk I/O might correlate with a drop in buffer cache hit ratio and an increase in query execution times. Understanding these connections helps you identify root causes rather than just symptoms. Regular capacity planning reviews using historical trends ensure you can scale proactively before hitting resource constraints.</p><p>Moving from simple uptime monitoring to comprehensive performance monitoring will significantly impact how you understand and manage your databases. 
By focusing on metrics that directly impact application performance and user experience, you can move from reactive fire-fighting to proactive optimization, ensuring your databases deliver consistent, reliable performance.</p></body></html>]]></description>
</item>
<item>
<title>A Practical Guide to Database Transaction Isolation Levels</title>
<link>https://www.navicat.com/company/aboutus/blog/3558-a-practical-guide-to-database-transaction-isolation-levels.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Practical Guide to Database Transaction Isolation Levels</title></head><body><b>Feb 24, 2026</b> by Robert Gravelle<br/><br/><p>Every modern application that stores data faces a fundamental challenge: how do you let multiple users work with the same database at the same time without their actions corrupting each other's data? Without proper safeguards, concurrent operations could produce incorrect results, duplicate transactions, or delete crucial information. Database transaction isolation levels exist to solve concurrency issues, giving you a toolkit of different strategies for managing concurrent access. Each isolation level represents a different answer to the question of how much transactions should be aware of and affected by each other's work. As you'll discover in this article, choosing the right isolation level means understanding the trade-offs between data accuracy, system performance, and the types of anomalies you're willing to accept in your application.</p><h1 class="blog-sub-title">What Are Transaction Isolation Levels?</h1><p>When multiple users access a database simultaneously, transactions can interfere with each other in unexpected ways. Transaction isolation levels determine how much one transaction can see or be affected by changes made by other concurrent transactions. It's helpful to think of isolation levels as different approaches to balancing two competing needs: maintaining data accuracy and allowing multiple people to work with the database at the same time. Higher isolation levels provide stronger guarantees about data consistency but can slow down your system, while lower levels offer better performance at the cost of potential data anomalies.</p><h1 class="blog-sub-title">Read Uncommitted: The Lowest Protection</h1><p>Read Uncommitted is the most permissive isolation level, where transactions can read data that other transactions have modified but not yet saved permanently. 
This approach prioritizes speed over accuracy. In this mode, you might encounter dirty reads, where your transaction sees changes that could be rolled back moments later. Imagine checking a bank account balance while someone else is transferring money out of it. You might see the reduced balance even though that transfer could fail and be reversed. Read Uncommitted is rarely appropriate for production systems, though it might be acceptable for generating rough reports where perfect accuracy is less critical than speed.</p><h1 class="blog-sub-title">Read Committed: The Common Default</h1><p>Read Committed prevents dirty reads by ensuring transactions only see data that has been permanently saved by other transactions. This is the default isolation level for most database systems, striking a reasonable balance between performance and reliability. However, Read Committed still allows non-repeatable reads. If you read the same row twice within your transaction, you might get different values if another transaction modified and committed that data between your reads. This level works well for many everyday applications where you need reliable data but can tolerate some changes happening during your transaction.</p><h1 class="blog-sub-title">Repeatable Read: Maintaining Consistency</h1><p>Repeatable Read goes further by guaranteeing that if you read a row once in your transaction, you'll get the same values if you read it again, even if other transactions are making changes. The database accomplishes this by holding locks on the data you've read until your transaction completes. This prevents other transactions from modifying that specific data. However, Repeatable Read can still experience phantom reads, where new rows matching your query criteria appear between reads. 
For instance, if you count all orders over one hundred dollars and then count again, new qualifying orders inserted by other transactions might appear in your second count, changing your result.</p><h1 class="blog-sub-title">Serializable: Maximum Isolation</h1><p>Serializable represents the strictest isolation level, making transactions behave as if they're running one after another in sequence rather than simultaneously. This level prevents all the anomalies that plague lower isolation levels, including dirty reads, non-repeatable reads, and phantom reads. The database achieves this by acquiring range locks that prevent other transactions from inserting, updating, or deleting data that could affect your queries. While Serializable provides the strongest data consistency guarantees, it significantly reduces how many transactions can run simultaneously, which can impact system performance. This level is essential for critical operations like financial transactions where even small inconsistencies are unacceptable.</p><h1 class="blog-sub-title">Working with Isolation Levels in Navicat</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> serves as an indispensable graphical interface for managing your database transactions and isolation levels. When you open a query window in Navicat, you're working directly with your database server, and Navicat provides a convenient way to execute the SQL commands that control isolation levels. You can set the isolation level for your current session by running standard SQL commands in the query editor. For example, in SQL Server you would execute SET TRANSACTION ISOLATION LEVEL READ COMMITTED, while in MySQL you might use SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ. 
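</p><p>You don't need a client-server database to watch committed-versus-uncommitted visibility in action. The toy Python sketch below uses two connections to one SQLite file (SQLite never permits dirty reads; the file and table names are placeholders):</p>

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file: the writer's uncommitted
# insert is invisible to the reader until commit() - the guarantee that
# rules out dirty reads.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
writer.commit()

# Python's sqlite3 module implicitly opens a transaction on INSERT
# and holds it open until commit() is called.
writer.execute("INSERT INTO orders (amount) VALUES (150.0)")

rows_before = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
writer.commit()
rows_after = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

print(rows_before, rows_after)  # 0 1
```

<p>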
Navicat faithfully sends these commands to your database, allowing you to experiment with different isolation levels and observe their effects.</p><p>The real power of using Navicat for understanding isolation levels comes from its ability to open multiple query windows simultaneously. You can set different isolation levels in separate windows, then run transactions in parallel to see how they interact. This hands-on approach helps you understand the practical differences between isolation levels. For instance, you could demonstrate a phantom read by setting one window to Repeatable Read and another to Read Committed, then inserting rows in one window while querying in the other. While Navicat doesn't enforce or manage isolation levels itself, since that's the database server's responsibility, it provides an accessible environment for learning and testing how different isolation configurations affect your data operations.</p><h1 class="blog-sub-title">Conclusion</h1><p>Transaction isolation levels give you control over how your database handles concurrent access, with each level offering a different balance between data consistency and performance. By understanding the trade-offs between Read Uncommitted, Read Committed, Repeatable Read, and Serializable, you can make informed decisions about which level best serves your application's needs. Whether you're building financial systems that demand perfect accuracy or reporting tools that prioritize speed, choosing the right isolation level is essential for creating reliable, efficient database applications.</p></body></html>]]></description>
</item>
<item>
<title>Database Connection Pooling Explained</title>
<link>https://www.navicat.com/company/aboutus/blog/3556-database-connection-pooling-explained.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Connection Pooling Explained</title></head><body><b>Feb 18, 2026</b> by Robert Gravelle<br/><br/><p>When your application needs to talk to a database, it must first establish a connection. This process might seem instantaneous from a user's perspective, but behind the scenes, it involves several time-consuming steps: the database server must authenticate credentials, allocate memory for the connection, and set up communication channels. If your application creates a new connection for every database query and then closes it immediately afterward, you're essentially forcing the system to repeat this expensive setup process hundreds or thousands of times per second.</p><p>Connection pooling offers an elegant solution to this inefficiency by creating a reservoir of pre-established connections that your application can reuse, dramatically reducing overhead and improving performance. Instead of constantly opening and closing connections, your application simply borrows a connection from the pool when needed and returns it when finished, allowing that same connection to serve many subsequent requests.</p><h1 class="blog-sub-title">Why Connection Pooling Matters</h1><p>The performance benefits of connection pooling can be quite substantial. Here's why: establishing a new database connection typically takes between 50 and 100 milliseconds, which might not sound like much until you multiply it across thousands of requests. With connection pooling, your application can handle significantly more concurrent users because it's not wasting time and resources constantly creating and destroying connections. 
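</p><p>The borrow-and-return cycle itself is simple enough to sketch in a few lines. The toy pool below (illustrative only - real applications should use the pooling built into their driver or framework) pre-opens a fixed number of SQLite connections and makes callers wait when all of them are checked out:</p>

```python
import contextlib
import queue
import sqlite3

class SimplePool:
    """Toy connection pool: opens `size` connections up front, lends them
    out one at a time, and blocks callers (up to `borrow_timeout` seconds)
    when every connection is already in use."""

    def __init__(self, factory, size=5, borrow_timeout=30):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(factory())  # pre-establish the connections
        self._timeout = borrow_timeout

    @contextlib.contextmanager
    def connection(self):
        conn = self._idle.get(timeout=self._timeout)  # borrow, or wait
        try:
            yield conn
        finally:
            self._idle.put(conn)  # always returned, even after an exception

pool = SimplePool(lambda: sqlite3.connect(":memory:"), size=2)
with pool.connection() as conn:
    print(conn.execute("SELECT 1").fetchone()[0])  # 1
```

<p>Note that the finally block guarantees the connection goes back to the pool even when a query raises an exception, which is exactly what prevents connection leaks.</p><p>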
Additionally, connection pools protect your database server from being overwhelmed by too many simultaneous connections, which could cause it to slow down or even crash.</p><h1 class="blog-sub-title">How to Configure Connection Pooling</h1><p>The configuration of a connection pool requires careful attention to several key parameters: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>The minimum pool size determines how many connections remain open even when your application is idle, ensuring that some connections are always ready when traffic picks up.</li>   <li>The maximum pool size sets an upper limit on how many connections can exist simultaneously, preventing your application from overwhelming the database server.</li>    <li>Connection timeout settings specify how long the application should wait when requesting a connection from a pool that's currently at maximum capacity. If all connections are in use and none become available within this timeout period, the application will receive an error rather than waiting indefinitely.</li>     <li>The idle timeout parameter determines how long a connection can sit unused in the pool before being closed, which helps free up resources during periods of low activity.</li></ul><p>When configuring your pool, start conservatively. A good rule of thumb for the maximum pool size is to divide your database server's connection capacity by the number of application instances that will connect to it. For example, if your database can handle 100 connections and you have five application servers, consider setting each application's maximum pool size to around 20 connections.</p><h1 class="blog-sub-title">Common Mistakes to Avoid</h1><p>One of the most frequent mistakes developers make is setting the pool size too large. While it might seem intuitive that more connections mean better performance, database servers actually work best with a moderate number of connections. 
Too many connections lead to excessive context switching and resource contention, ultimately degrading performance. Studies have shown that for many workloads, a pool size between 10 and 30 connections per application instance provides optimal throughput.</p><p>Another critical error is failing to properly return connections to the pool. If your application code opens a connection but doesn't close it due to an exception or programming oversight, that connection remains locked and unavailable to other parts of your application. Over time, this connection leak will exhaust your pool, causing new requests to time out and fail. Always use try-finally blocks or equivalent constructs in your programming language to ensure connections are returned even when errors occur.</p><p>Developers sometimes also neglect to configure connection validation. Connections can become stale or broken due to network issues, database restarts, or timeout settings on the database server. Without validation checks, your application might retrieve a dead connection from the pool and fail when attempting to use it. Enabling connection testing ensures that the pool automatically detects and replaces broken connections before handing them to your application.</p><h1 class="blog-sub-title">Monitoring Connection Pool Performance</h1><p>Once you've configured connection pooling for your application, monitoring becomes essential to ensure your settings are appropriate for your workload. Tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> can help by tracking overall database connection activity from the server's perspective, showing you metrics like the current number of active connections, connection patterns over time, and when connection counts spike unexpectedly. 
While Navicat Monitor observes connections at the database server level rather than within your application's connection pool itself, this server-side view provides valuable insight into whether your pool sizing decisions are creating the right balance. If you notice that your database consistently shows connection counts near your server's maximum capacity, or if you see frequent connection spikes that correlate with application slowdowns, these patterns suggest your application connection pools may need adjustment. Combining this server-level monitoring with application-level metrics from your pooling library gives you a complete picture of how connections flow through your entire system, helping you identify bottlenecks and optimize performance effectively.</p><h1 class="blog-sub-title">Conclusion</h1><p>Database connection pooling represents one of those infrastructure decisions that often goes unnoticed when done correctly but can cause significant problems when misconfigured. By maintaining a ready supply of reusable connections, properly configuring pool parameters for your specific workload, and avoiding common pitfalls like oversized pools and connection leaks, you can dramatically improve your application's performance and reliability. The time invested in understanding and properly implementing connection pooling pays dividends in the form of faster response times, better resource utilization, and a more stable application overall.</p></body></html>]]></description>
</item>
<item>
<title>Managing Database Credentials Securely</title>
<link>https://www.navicat.com/company/aboutus/blog/3554-managing-database-credentials-securely.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Managing Database Credentials Securely</title></head><body><b>Feb 13, 2026</b> by Robert Gravelle<br/><br/><p>Database credentials represent one of the most critical security assets in any organization. When these credentials fall into the wrong hands, the consequences can be devastating, from data breaches to regulatory fines and reputational damage. Understanding how to properly manage, store, and rotate these credentials is essential for maintaining a secure database environment.</p><h1 class="blog-sub-title">Understanding Secrets Management</h1><p>Secrets management refers to the tools, processes, and policies used to control access to sensitive authentication information. Rather than hardcoding passwords directly into application code or storing them in plain text configuration files, modern secrets management solutions provide encrypted storage with strict access controls. These systems act as centralized vaults where credentials can be stored, accessed programmatically, and audited comprehensively.</p><p>The fundamental principle behind effective secrets management is separation of concerns. Application code should never contain actual credentials but rather references to them. When an application needs to connect to a database, it requests the credentials from the secrets management system at runtime, uses them briefly for authentication, and then discards them from memory. This approach dramatically reduces the attack surface because credentials never persist in application code repositories or deployment packages.</p><p>Popular secrets management platforms like HashiCorp Vault, AWS Secrets Manager, and Azure Key Vault provide additional security layers through features like dynamic secrets generation, time-limited access tokens, and detailed audit logging. 
These systems ensure that every credential access is tracked, making it possible to identify suspicious patterns or unauthorized access attempts.</p><h1 class="blog-sub-title">Implementing Credential Rotation Strategies</h1><p>Credential rotation involves regularly changing passwords and access keys to limit the window of vulnerability if credentials become compromised. Without rotation, a single leaked password could provide indefinite access to your database. Establishing a rotation schedule based on your organization's risk profile is crucial, whether that means rotating credentials monthly, quarterly, or on-demand when security incidents occur.</p><p>Automated rotation is significantly more reliable than manual processes. Modern secrets management systems can automatically generate new passwords, update them in the database, and notify connected applications without requiring downtime. This automation eliminates the human error factor that often leads to security gaps during manual credential updates.</p><p>When implementing rotation, consider the impact on connected applications and services. Implementing a grace period where both old and new credentials remain valid temporarily can prevent service disruptions during the transition. Additionally, maintaining a clear inventory of all systems that use specific credentials helps ensure nothing gets overlooked during rotation cycles.</p><h1 class="blog-sub-title">Avoiding Common Security Pitfalls</h1><p>One of the most frequent mistakes organizations make is storing credentials in version control systems like Git. Even if a repository is private, this practice creates numerous copies of sensitive information across development machines and backup systems. Developers should use environment variables or configuration management tools instead, keeping credentials completely separate from source code.</p><p>Another critical pitfall involves insufficient access controls on credential storage locations. 
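</p><p>On Unix-like systems, for example, locking a credentials file down to its owner takes a single call. A minimal sketch (the file name and contents are placeholders):</p>

```python
import os
import stat
import tempfile

# Write a throwaway config file containing a placeholder password, then
# restrict it to rw------- (0600) so only the owning user can read it.
path = os.path.join(tempfile.mkdtemp(), "db.conf")
with open(path, "w") as f:
    f.write("password=placeholder-only\n")

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # owner read/write, nothing else

print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
```

<p>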
Configuration files containing database passwords should have restrictive file permissions, ensuring only the specific user account running the application can read them. Similarly, cloud storage buckets or secret management systems should enforce the principle of least privilege, granting access only to services and individuals who genuinely need it.</p><p>Default or weak passwords represent another common vulnerability. Many database installations ship with default administrative credentials that must be changed immediately upon deployment. Strong passwords should combine uppercase and lowercase letters, numbers, and special characters, with sufficient length to resist brute force attacks. Even better, consider using randomly generated passwords that humans never need to remember or type manually.</p><h1 class="blog-sub-title">How Navicat Supports Secure Credential Management</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, the popular database management and development tool, implements several security features to help protect your database credentials. When you save connection information, Navicat encrypts database passwords before storing them, ensuring that credentials aren't stored in plain text on your computer. The connection settings are saved in locations that only the logged-in user can access, preventing other users on the same system from viewing your database configurations.</p><p>For remote database connections, Navicat supports SSH tunneling, which establishes secure encrypted sessions between your client and the database server. This feature is particularly valuable when connecting to databases over untrusted networks, as it wraps all database traffic in an encrypted tunnel. 
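<p>Picking up the earlier recommendation of randomly generated passwords that humans never type: Python's standard <code>secrets</code> module is built for exactly this. The alphabet and length below are illustrative choices, not a prescribed policy.</p>

```python
# Generate a strong random password of the kind recommended above,
# using Python's standard secrets module (cryptographically secure).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"


def generate_password(length=24):
    """Return a random password containing all four character classes."""
    while True:
        pwd = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in "!@#$%^&*" for c in pwd)):
            return pwd


pwd = generate_password()
print(len(pwd))  # 24
```

<p>The retry loop simply redraws until every character class appears; at 24 characters that almost always succeeds on the first attempt.</p>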
You can authenticate these SSH connections using either passwords or public/private key pairs, with the latter providing stronger security against unauthorized access.</p><p>Navicat also includes support for SSL connections, allowing you to encrypt the communication channel between the client application and your database server. This prevents credentials and data from being intercepted during transmission. When working with <a class="default-links" href="https://www.navicat.com/en/products/navicat-cloud" target="_blank">Navicat Cloud</a>, the service uses encryption both in transit through SSL connections and at rest through server-side encryption, though it's worth noting that database passwords themselves are never synchronized to the cloud, only connection settings.</p><h1 class="blog-sub-title">Conclusion</h1><p>Managing database credentials securely requires a comprehensive approach that combines proper secrets management infrastructure, regular credential rotation, and vigilance against common security mistakes. By treating credentials as critical assets that deserve dedicated protection mechanisms, organizations can significantly reduce their risk of data breaches and unauthorized access. The investment in proper credential management practices pays dividends through improved security posture, easier compliance with regulatory requirements, and greater peace of mind knowing that your data remains protected.</p></body></html>]]></description>
</item>
<item>
<title>Building Resilient Database Architectures</title>
<link>https://www.navicat.com/company/aboutus/blog/3551-building-resilient-database-architectures.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Building Resilient Database Architectures</title></head><body><b>Feb 10, 2026</b> by Robert Gravelle<br/><br/><p>In today's fast-paced economy, database downtime can result in significant financial losses and damage to an organization's reputation. Building resilient database architectures has become indispensable for businesses that depend on continuous access to their data. A truly resilient database system can withstand failures, recover quickly from disasters, and maintain high availability even under adverse conditions.</p>    <h1 class="blog-sub-title">Components of a Resilient Database</h1>        <p>Database resilience refers to a system's ability to maintain operations during and after disruptions, whether they stem from hardware failures, software bugs, network issues, or natural disasters. A resilient architecture incorporates multiple layers of protection that work together to minimize downtime and data loss. This approach combines proactive planning with reactive capabilities, ensuring that when problems inevitably occur, their impact remains minimal and recovery happens swiftly.</p>    <h1 class="blog-sub-title">Disaster Recovery Planning</h1>        <p>Disaster recovery forms the foundation of database resilience by establishing procedures for restoring operations after catastrophic events. Effective disaster recovery begins with comprehensive backup strategies that capture both full and incremental snapshots of your data at regular intervals. These backups should be stored in geographically diverse locations to protect against regional disasters, with at least one copy maintained off-site or in a different cloud region.</p>        <p>Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are critical metrics that guide disaster recovery planning. RTO defines the maximum acceptable downtime, while RPO determines how much data loss your organization can tolerate. 
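<p>The link between RPO and backup frequency can be made concrete with a toy calculation. The safety factor and figures below are illustrative assumptions, not a standard formula.</p>

```python
# Toy RPO-to-backup-interval calculation. If the RPO is 15 minutes,
# the gap between successive backups (or log-shipping runs) must never
# exceed 15 minutes; figures here are illustrative only.

def max_backup_interval_minutes(rpo_minutes, safety_factor=0.5):
    """Schedule backups at a fraction of the RPO, leaving headroom
    for backup duration and transfer time."""
    return rpo_minutes * safety_factor


# An organization that can tolerate at most 15 minutes of data loss:
interval = max_backup_interval_minutes(15)
print(interval)  # 7.5 -> take an incremental snapshot at least every 7.5 minutes
```

<p>The headroom matters because a backup that itself takes several minutes to complete eats into the window the RPO allows.</p>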
Understanding these metrics helps you design appropriate backup frequencies and recovery procedures. Regular disaster recovery drills ensure that your team can execute recovery plans smoothly under pressure, revealing potential weaknesses before a real crisis occurs.</p>    <h1 class="blog-sub-title">High Availability Strategies</h1>        <p>High availability focuses on minimizing planned and unplanned downtime through redundancy and automated failover mechanisms. Database replication creates multiple copies of your data across different servers or data centers, allowing traffic to be redirected if the primary database becomes unavailable. Synchronous replication ensures data consistency across all replicas but may introduce latency, while asynchronous replication offers better performance at the cost of potential data lag.</p>        <p>Load balancing distributes database queries across multiple servers, preventing any single system from becoming overwhelmed. This not only improves performance but also provides redundancy, as other servers can absorb the workload if one fails. Implementing connection pooling and caching layers further enhances availability by reducing the load on your database servers and providing faster response times for frequently accessed data.</p>    <h1 class="blog-sub-title">Chaos Engineering for Databases</h1>        <p>Chaos engineering represents a proactive approach to resilience by deliberately introducing controlled failures into your database systems to identify weaknesses before they cause real problems. This practice involves running experiments that simulate various failure scenarios, such as server crashes, network partitions, or sudden traffic spikes, while monitoring how your system responds.</p>        <p>Starting with non-production environments, chaos experiments might include killing database processes, introducing network latency, or exhausting system resources to observe how replication handles these disruptions. 
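<p>A chaos experiment of the kind just described can be simulated in miniature: kill one replica at random and verify that reads still succeed. This is purely illustrative Python, not a real chaos-engineering tool.</p>

```python
# Minimal chaos-style simulation: deliberately fail one replica,
# then confirm that failover routing still serves reads.
import random


class Replica:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def read(self):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"data-from-{self.name}"


def read_with_failover(replicas):
    """Try each replica in turn; succeed as long as one survives."""
    for r in replicas:
        try:
            return r.read()
        except ConnectionError:
            continue
    raise RuntimeError("all replicas down")


replicas = [Replica("replica-1"), Replica("replica-2"), Replica("replica-3")]

# The chaos step: kill one replica at random, as a crash would.
random.choice(replicas).alive = False

result = read_with_failover(replicas)
print(result)  # reads still succeed from a surviving replica
```

<p>Real experiments replace the simulated kill with an actual process termination or network partition, but the assertion is the same: the system keeps answering.</p>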
Gradually expanding these experiments to production systems during low-traffic periods builds confidence in your architecture's resilience. The insights gained from chaos engineering lead to improvements in monitoring, alerting, and automated recovery procedures that strengthen your overall database infrastructure.</p>    <h1 class="blog-sub-title">Navicat's Role in Database Resilience</h1>        <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provides comprehensive database management tools that support resilience through features like Data Synchronization, Data Transfer, and backups:</p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>The Data Synchronization feature helps maintain consistency across multiple databases, which is essential for high availability configurations. The tool allows you to synchronize data between databases and set up regular synchronization tasks, ensuring your replicas remain up to date. </li>    <li>The Data Transfer feature facilitates smooth data migration between different database systems, minimizing risks of data loss or corruption during infrastructure changes or disaster recovery scenarios. </li>     <li>Navicat's backup functionality creates structured snapshots of your databases that can be restored quickly when needed, supporting disaster recovery planning with its user-friendly interface for creating and managing database backups.</li>    </ul>        <p>For monitoring and administration, <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> provides real-time performance monitoring for your database server instances, helping you detect potential issues before they impact availability. The platform supports multiple database systems including MySQL, MariaDB, PostgreSQL and SQL Server. 
It's also compatible with cloud databases like Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud and Microsoft Azure, making it valuable for organizations managing diverse database environments that need consistent resilience practices across different platforms.</p>    <h1 class="blog-sub-title">Conclusion</h1>        <p>Building resilient database architectures requires a comprehensive approach that combines disaster recovery planning, high availability strategies, and proactive testing through chaos engineering. By implementing multiple layers of protection and regularly testing your systems under stress, you create databases that can withstand failures and maintain operations even during adverse conditions. The investment in resilience pays dividends through reduced downtime, protected data, and the confidence that your critical systems can weather any storm.</p></body></html>]]></description>
</item>
<item>
<title>The Future of Database Licensing Models: Navigating the Shift in How We Pay for Data Infrastructure</title>
<link>https://www.navicat.com/company/aboutus/blog/3548-the-future-of-database-licensing-models-navigating-the-shift-in-how-we-pay-for-data-infrastructure.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>The Future of Database Licensing Models: Navigating the Shift in How We Pay for Data Infrastructure</title></head><body><b>Feb 6, 2026</b> by Robert Gravelle<br/><br/>    <p>Database licensing is currently undergoing a significant transformation that will reshape how organizations budget for and deploy data infrastructure. Traditional perpetual licensing models, where organizations paid substantial upfront fees for indefinite database use, are giving way to subscription-based and consumption-driven approaches that promise greater flexibility but introduce new complexities. Simultaneously, the tension between open-core and fully open-source models is forcing organizations to reconsider their relationships with database vendors and their broader software strategy. Understanding these evolving licensing models has become essential for technology leaders making strategic decisions about their data infrastructure investments.</p>    <h1 class="blog-sub-title">The Subscription Revolution</h1>    <p>Subscription-based database licensing has emerged as the dominant model among major vendors, fundamentally changing the economics of database deployment. Rather than paying large capital expenditures for perpetual licenses, organizations now face predictable monthly or annual operational expenses. This shift aligns well with cloud adoption patterns and provides vendors with steady, recurring revenue streams. For organizations, subscription models lower initial barriers to entry and provide easier paths to scaling database capacity up or down based on changing needs. However, this approach introduces long-term cost considerations that differ markedly from traditional licensing, as organizations never truly own their database software and must maintain continuous payments to retain access. 
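<p>The subscription-versus-perpetual trade-off can be sketched with a toy cost model. Every figure below is hypothetical; the point is the shape of the comparison, not the numbers.</p>

```python
# Toy comparison of cumulative subscription cost vs. a perpetual
# license plus annual maintenance. All figures are hypothetical.

def subscription_total(monthly_fee, years):
    return monthly_fee * 12 * years


def perpetual_total(upfront, annual_maintenance, years):
    return upfront + annual_maintenance * years


years = 10
sub = subscription_total(monthly_fee=2_000, years=years)                         # 240,000
perp = perpetual_total(upfront=60_000, annual_maintenance=12_000, years=years)   # 180,000

print(sub, perp, sub > perp)  # over ten years the subscription costs more
```

<p>With these assumed inputs the subscription overtakes the perpetual license well before year ten, which is why the article stresses careful financial modeling.</p>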
The total cost over a five or ten year period can significantly exceed traditional perpetual licensing costs, making careful financial modeling essential when evaluating subscription offerings.</p>    <h1 class="blog-sub-title">Usage-Based Pricing and Serverless Models</h1>    <p>Beyond simple subscriptions, usage-based pricing represents the next frontier in database licensing evolution. Serverless database offerings from cloud providers charge based on actual consumption of compute, storage, and I/O resources rather than pre-allocated capacity. This consumption model promises perfect alignment between costs and actual usage, eliminating waste from over-provisioned resources and allowing organizations to pay only for what they use. The appeal is particularly strong for workloads with variable or unpredictable demand patterns, where traditional capacity planning proves challenging. However, usage-based pricing introduces budget unpredictability that finance teams find difficult to manage, as database costs can fluctuate significantly month to month. Organizations adopting these models often find value in maintaining some fixed-cost components within their database infrastructure, such as management and development tools with stable licensing. Database management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, which offer perpetual licensing with optional maintenance, provide a cost anchor that helps balance the variability of consumption-based database pricing, making overall infrastructure costs more manageable and predictable for planning purposes.</p>    <h1 class="blog-sub-title">Open Source Economics and the Open-Core Debate</h1>    <p>The tension between fully open-source databases and open-core models has intensified as vendors seek sustainable business models while maintaining developer communities. 
Fully open-source databases like PostgreSQL and MySQL have gained enormous market share by eliminating licensing fees entirely, fundamentally disrupting the database market. Organizations adopting these technologies save dramatically on direct licensing costs while gaining freedom from vendor lock-in and the ability to modify source code for specific needs. However, the total cost of ownership extends beyond the database itself to encompass management, monitoring, and development tools. Commercial database management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provide enterprise-grade capabilities for open-source databases, filling gaps in native tooling and offering features such as visual schema design, automated comparison and synchronization, and cross-database migration tools. This hybrid approach, combining license-free databases with commercial management tools, often proves more cost-effective than traditional commercial database stacks while providing greater deployment flexibility.</p>    <h1 class="blog-sub-title">The Open-Core Compromise</h1>    <p>Open-core models attempt to balance community development with commercial viability by offering core database functionality under open-source licenses while reserving advanced enterprise features for paid editions. MongoDB, Elasticsearch, and Redis have all employed variations of this approach, with varying degrees of community acceptance. These models allow developers to build and test applications using free versions while requiring payment when organizations deploy at scale or need enterprise capabilities like advanced security, monitoring, or high availability features. The challenge lies in determining where to draw the line between free and paid features, as overly restrictive open-core models can alienate developer communities while overly generous ones struggle to generate sufficient revenue. 
Recent licensing changes by several open-core vendors, restricting cloud provider usage of their software, have highlighted the ongoing tension between openness and commercial sustainability in this space.</p>    <h1 class="blog-sub-title">Vendor Independence and Strategic Flexibility</h1>    <p>As licensing models evolve, organizations increasingly recognize the strategic value of maintaining vendor independence and deployment flexibility. Vendor lock-in concerns extend beyond licensing terms to encompass proprietary management tools, specialized skill requirements, and migration difficulties. Organizations seeking to preserve their options are adopting database-agnostic management platforms that provide consistent interfaces across multiple database technologies. Tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> enable database professionals to work seamlessly across MySQL, PostgreSQL, MongoDB, SQL Server, and Oracle using familiar workflows, significantly reducing the friction and retraining costs associated with changing database vendors or maintaining heterogeneous database environments. This approach gives organizations greater leverage in licensing negotiations, as the ability to migrate between platforms becomes more practical and less disruptive to operations.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>The future of database licensing will likely see continued diversification rather than convergence on a single model. Organizations will navigate increasingly complex decisions that balance upfront costs against long-term expenses, flexibility against predictability, and vendor relationships against independence. Success in this shifting market requires holistic thinking that considers not just database licensing costs but the entire array of tools, skills, and architectural decisions that surround database deployments. 
Those who navigate these licensing decisions effectively will find opportunities to optimize costs while maintaining the flexibility to adapt as both technology and business requirements continue to evolve.</p></body></html>]]></description>
</item>
<item>
<title>Harnessing PostgreSQL Power: An Introduction to Supabase</title>
<link>https://www.navicat.com/company/aboutus/blog/3528-harnessing-postgresql-power-an-introduction-to-supabase.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Harnessing PostgreSQL Power: An Introduction to Supabase</title></head><body><b>Jan 30, 2026</b> by Robert Gravelle<br/><br/>        <p>Supabase has rapidly emerged as one of the most popular open-source backend-as-a-service platforms in the developer community, earning its place among the top 100 most-starred repositories on GitHub. This impressive achievement reflects the platform's ability to simplify complex backend development while maintaining the power and flexibility that modern applications demand. This article explores what makes Supabase unique, its core capabilities, and how it integrates with professional database tools like Navicat to streamline your development workflow.</p>    <h1 class="blog-sub-title">What is Supabase?</h1>        <p>Supabase is a comprehensive Postgres development platform that provides developers with everything needed to build modern web, mobile, and AI applications. At its core, every Supabase project consists of a full PostgreSQL database, which brings over 35 years of proven reliability and feature robustness to your applications. The platform describes itself as an open-source alternative to Firebase, but with the added advantage of using SQL and the full power of Postgres rather than NoSQL databases.</p>        <p>The platform's philosophy centers on simplicity and developer experience. Rather than requiring developers to piece together multiple services and manage complex infrastructure, Supabase delivers an integrated solution where authentication, database APIs, real-time subscriptions, storage, and serverless functions work together seamlessly. This unified approach means developers can focus on building features rather than configuring backends.</p>    <h1 class="blog-sub-title">Core Features and Capabilities</h1>        <p>The strength of Supabase lies in its comprehensive feature set. 
The platform automatically generates RESTful APIs for your database through PostgREST, eliminating the need to manually create API endpoints for basic CRUD operations. This auto-generated API respects your database's Row Level Security policies, ensuring that data access remains secure and properly controlled.</p>        <p>Real-time functionality is built directly into the platform through Supabase Realtime, an Elixir server that monitors PostgreSQL's replication system and broadcasts changes over WebSockets. This makes building collaborative tools, live dashboards, and real-time chat applications straightforward, as data updates are automatically pushed to connected clients whenever database changes occur.</p>        <p>Authentication is handled by GoTrue, a JWT-based authentication system that supports multiple providers including email, phone, and social logins. The authentication system integrates tightly with Row Level Security, allowing developers to implement fine-grained access control policies directly in the database. Supabase Storage provides S3-compatible file storage with permissions managed through PostgreSQL, maintaining consistency across your entire application stack.</p>        <p>Edge Functions bring serverless capabilities to Supabase, allowing developers to write custom backend logic without managing servers. These functions run on Deno and can be deployed globally, with recent optimizations making function boot times up to 300 percent faster in many cases. The platform has also introduced support for vector embeddings through the pgvector extension, positioning Supabase as a powerful option for AI applications that require semantic search and similarity matching.</p>    <h1 class="blog-sub-title">Development Experience and Tooling</h1>        <p>Supabase provides a sophisticated web-based dashboard that makes database management accessible even to developers who aren't PostgreSQL experts. 
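<p>The semantic search that pgvector enables reduces, at its core, to measuring distance between embedding vectors. The plain-Python sketch below shows cosine similarity on tiny made-up vectors; it is illustrative only and not the pgvector API itself.</p>

```python
# Cosine similarity: the distance measure behind embedding-based
# semantic search. Vectors here are tiny illustrative stand-ins for
# real model embeddings.
import math


def cosine_similarity(a, b):
    """Higher values mean more semantically similar embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


query = [0.9, 0.1, 0.0]
doc_close = [0.8, 0.2, 0.1]   # points roughly the same direction as the query
doc_far = [0.0, 0.1, 0.9]     # points a very different direction

print(cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far))  # True
```

<p>In PostgreSQL, pgvector exposes the same idea as a distance operator usable in ORDER BY, so "find the most similar rows" becomes an ordinary SQL query.</p>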
The table editor presents a spreadsheet-like interface for viewing and editing data, while the SQL editor includes helpful features like query history and favorited queries. Recent updates have introduced tabbed interfaces in both the table and SQL editors, making it easier to work with multiple queries and tables simultaneously.</p>        <p>The platform's commitment to developer experience extends to its documentation and AI-powered assistance. Supabase recently launched an AI Assistant within the dashboard that can help with query optimization, schema design, and general troubleshooting. The platform also introduced postgres.new, a browser-based tool that uses large language models to help developers interact with PostgreSQL more intuitively.</p>    <h1 class="blog-sub-title">How Navicat Supports Supabase</h1>        <p>For developers who prefer working with dedicated database management tools, Navicat provides excellent support for Supabase databases. <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> can connect directly to Supabase instances, offering a professional-grade interface for database administration and development that complements Supabase's built-in tools.</p>        <p>Connecting Navicat to Supabase is straightforward using the session pooler connection string available in your Supabase project settings. Navicat's intuitive graphical interface allows you to create, modify, and manage database objects like tables, views, functions, and triggers through visual designers rather than writing complex SQL. 
The latest version, Navicat 17.2, includes an AI Assistant, enhanced query visualization tools, and comprehensive data profiling capabilities that help you understand and optimize your Supabase database structure.</p>        <p>Navicat excels at tasks like data migration, allowing you to transfer data between Supabase and other database systems, and provides sophisticated backup and restore functionality. The visual query builder and execution plan analyzer are particularly useful for optimizing complex queries on your Supabase database. For teams working across multiple database platforms, Navicat Premium can manage Supabase alongside MySQL, MongoDB, SQL Server, and other databases from a single application, streamlining workflows for developers managing diverse data infrastructure.</p>    <h1 class="blog-sub-title">Conclusion</h1>        <p>Supabase represents a significant evolution in how developers approach backend infrastructure. By combining the reliability of PostgreSQL with modern developer tooling, real-time capabilities, and integrated authentication, the platform delivers a complete backend solution that scales from prototype to production. Its open-source nature ensures transparency and portability, while its growing ecosystem of tools and integrations, including support from established database management platforms like Navicat, demonstrates the platform's maturity and adoption. Whether you're building a startup MVP or an enterprise application, Supabase provides the foundation to move quickly without sacrificing the power and flexibility that complex applications demand.</p></body></html>]]></description>
</item>
<item>
<title>The ROI of Database Automation: Quantifying the Business Value of Automated Tuning, Patching, and Optimization</title>
<link>https://www.navicat.com/company/aboutus/blog/3527-the-roi-of-database-automation-quantifying-the-business-value-of-automated-tuning,-patching,-and-optimization.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The ROI of Database Automation: Quantifying the Business Value of Automated Tuning, Patching, and Optimization</title></head><body><b>Jan 23, 2026</b> by Robert Gravelle<br/><br/><p>Database performance and reliability have become critical determinants of business success, directly influencing revenue, customer satisfaction, and competitive positioning. Yet many organizations still rely on manual processes for critical database tasks like tuning, patching, and optimization. As databases grow in complexity and scale, the hidden costs of manual management compound rapidly. Database automation represents not just a technical upgrade, but a strategic investment with measurable returns that extend across the entire organization.</p>        <h1 class="blog-sub-title">The Hidden Costs of Manual Database Management</h1>        <p>Manual database administration carries costs that extend far beyond salary expenses. Database administrators spend countless hours on repetitive tasks such as applying security patches, monitoring performance metrics, and adjusting configuration parameters. This time investment translates directly to opportunity cost, as skilled DBAs are diverted from strategic initiatives that could drive innovation and business growth. Human error introduces another significant cost factor, with misconfigurations or delayed patches potentially leading to security breaches, data corruption, or system downtime. A single database outage can cost enterprises thousands or even millions of dollars per hour, not to mention the reputational damage that accompanies service disruptions.</p>        <h1 class="blog-sub-title">Quantifying the Benefits of Automation</h1>        <p>The business value of database automation becomes clear when examining specific operational improvements. 
Automated tuning continuously monitors database performance and adjusts parameters in real-time, eliminating the lag time between problem detection and resolution. Organizations implementing automated tuning typically see query performance improvements of twenty to forty percent, which translates directly to faster application response times and better user experiences. Automated patching reduces vulnerability windows from weeks to hours by deploying security updates immediately upon release, significantly decreasing the risk of exploitation. Furthermore, automation enables database teams to scale their operations without proportional increases in headcount, with some organizations managing three to five times more database instances per administrator after implementing automation tools.</p>        <h1 class="blog-sub-title">Measuring ROI Across Key Metrics</h1>        <p>The financial benefits of database automation appear in several interconnected areas of business operations. The most immediate savings come from reduced labor costs, as automation handles routine tasks that previously consumed thirty to fifty percent of DBA time. Downtime reduction represents another major financial benefit, with automated monitoring and self-healing capabilities preventing incidents before they impact users. Organizations should also consider the cost avoidance from improved security posture, as automated patching and compliance monitoring help prevent breaches that could result in regulatory fines and remediation expenses. Performance improvements drive revenue growth by enabling applications to handle higher transaction volumes and deliver superior customer experiences. 
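<p>The metrics above can be combined into a simple annual ROI estimate. Every input in this sketch is a hypothetical figure to be replaced with an organization's own numbers.</p>

```python
# Toy annual ROI estimate for database automation. All inputs are
# hypothetical placeholders, not benchmarks.

def automation_roi(dba_cost, dba_time_freed_pct, downtime_hours_avoided,
                   cost_per_downtime_hour, automation_cost):
    """Return (savings - cost) / cost for one year of automation."""
    savings = (dba_cost * dba_time_freed_pct
               + downtime_hours_avoided * cost_per_downtime_hour)
    return (savings - automation_cost) / automation_cost


roi = automation_roi(
    dba_cost=150_000,           # fully loaded annual DBA cost
    dba_time_freed_pct=0.40,    # 40% of time freed from routine tasks
    downtime_hours_avoided=10,
    cost_per_downtime_hour=20_000,
    automation_cost=50_000,
)
print(round(roi, 2))  # 4.2 -> savings exceed the automation spend several times over
```

<p>Even this crude model makes the article's point visible: downtime avoidance, not labor savings alone, usually dominates the return.</p>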
When calculating total ROI, forward-thinking organizations include the strategic value of freeing technical talent to focus on innovation rather than maintenance.</p>        <h1 class="blog-sub-title">Automating Database Tasks with Navicat</h1>        <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> offers comprehensive automation capabilities through its Automation utility, which allows database administrators to create and schedule batch jobs for recurring tasks. The platform enables teams to automate queries, data imports and exports, data transfers between systems, and database synchronization operations. Organizations can set up automated data synchronization processes that run on predefined schedules, ensuring consistency across development, testing, and production environments without manual intervention.</p>        <p>Navicat's automation features include the ability to generate and export database documentation as PDFs on automated schedules, with email notifications sent to stakeholders upon completion. This streamlines compliance reporting and documentation requirements. The platform also supports automated backup operations through user-friendly interfaces for database-native backup utilities, helping teams maintain consistent data protection practices without relying on manual processes. By integrating automation into their workflows through tools like Navicat, database teams can achieve many of the ROI benefits discussed throughout this article while maintaining control and visibility over their database operations.</p>        <h1 class="blog-sub-title">Conclusion</h1>        <p>Database automation represents a compelling investment for organizations of any size. The quantifiable returns, including reduced labor costs, minimized downtime, improved performance, and enhanced security, create a strong financial case for automation. 
Beyond the immediate metrics, automation positions IT organizations for future growth by enabling teams to manage increasing database complexity without proportional resource increases. As databases continue to serve as the backbone of digital business, the question is no longer whether to automate, but how quickly organizations can implement automation to maintain competitive advantage.</p></body></html>]]></description>
</item>
<item>
<title>Database Observability: The New Frontier in Performance Management</title>
<link>https://www.navicat.com/company/aboutus/blog/3498-database-observability-the-new-frontier-in-performance-management.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Observability: The New Frontier in Performance Management</title></head><body><b>Jan 16, 2026</b> by Robert Gravelle<br/><br/><p>Modern databases power everything from e-commerce platforms to healthcare systems, making their reliable performance absolutely critical to business operations. Yet traditional monitoring approaches that simply track CPU usage and memory consumption no longer meet the needs of today's complex data infrastructures. Database observability represents a fundamental shift in how organizations understand and optimize their database performance, transforming reactive troubleshooting into proactive performance management.</p>        <h1 class="blog-sub-title">Monitoring vs. Observability</h1>        <p>Traditional monitoring tells you that something is wrong; perhaps response times have slowed or error rates have increased. Database observability, however, goes several steps further by helping you understand why problems occur and how to prevent them. This approach incorporates three essential pillars: metrics that quantify performance, logs that record system events, and traces that follow individual transactions through your infrastructure. Together, these elements provide the contextual insights needed to diagnose issues quickly and optimize performance continuously.</p>        <p>The difference becomes particularly apparent in distributed architectures where databases span multiple environments and interact with numerous applications. While traditional monitoring might alert you to slow response times, observability platforms can pinpoint the exact query causing bottlenecks, identify underutilized indexes, and even suggest optimization strategies based on historical patterns. 
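To make the three-pillar idea concrete, the toy sketch below joins metric, log, and trace records on a shared trace id to attribute a slow request to a specific query. All record shapes, field names, and data here are hypothetical, not any observability platform's actual schema:

```python
# Toy correlation of the three observability pillars on a shared trace id.
# All field names and records are hypothetical examples.
metrics = [{"trace_id": "t1", "latency_ms": 950},
           {"trace_id": "t2", "latency_ms": 12}]
logs    = [{"trace_id": "t1", "event": "seq scan on orders"}]
traces  = [{"trace_id": "t1", "spans": ["api", "db:SELECT * FROM orders"]},
           {"trace_id": "t2", "spans": ["api", "cache"]}]

def slow_request_causes(threshold_ms=500):
    """For each slow request, gather its log events and database spans."""
    slow = {m["trace_id"] for m in metrics if m["latency_ms"] > threshold_ms}
    findings = {}
    for tid in slow:
        findings[tid] = {
            "events": [l["event"] for l in logs if l["trace_id"] == tid],
            "db_spans": [s for t in traces if t["trace_id"] == tid
                         for s in t["spans"] if s.startswith("db:")],
        }
    return findings

print(slow_request_causes())
```

A metric alone says "t1 was slow"; only the joined log and trace data say which query, and why.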
This deeper visibility empowers database administrators to move from firefighting to strategic performance optimization.</p>        <h1 class="blog-sub-title">The Evolution of Specialized Monitoring Tools</h1>        <p>As database environments have grown more complex, specialized observability platforms have emerged to address these challenges. Leading solutions provide comprehensive visibility across multiple database engines, offering features such as query-level performance tracking, execution plan analysis, and automated anomaly detection. These platforms excel at correlating database performance with application metrics, helping teams understand how database issues impact overall user experience.</p>        <p>What makes modern tools particularly powerful is their ability to collect and analyze vast amounts of performance data in real time. They can track query execution patterns, monitor resource utilization across database clusters, and detect subtle performance degradations before they escalate into serious problems. Many platforms also incorporate machine learning algorithms that establish baseline performance profiles and automatically alert administrators when behavior deviates from normal patterns.</p>        <h1 class="blog-sub-title">Navicat Monitor: Comprehensive Database Performance Management</h1>        <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> exemplifies modern database observability through its agentless architecture that monitors MySQL, MariaDB, PostgreSQL, and SQL Server instances without requiring software installation on database servers. 
The platform supports both locally hosted instances and popular cloud services including Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud, and Microsoft Azure, making it particularly valuable for organizations managing heterogeneous database environments.</p>        <p>The platform provides advanced root cause analysis capabilities that enable administrators to drill down into server metrics, performance statistics, hardware usage, and historical data when issues arise. Its built-in alert system allows administrators to define custom thresholds and receive notifications via email, SMS, SNMP, or Slack when warning or critical conditions occur, ensuring databases remain constantly available and performing optimally.</p>        <p>Navicat Monitor includes rich real-time and historical graphs that provide detailed views of server load, performance, availability, disk usage, network throughput, table locks, and replication health. The platform's Query Analyzer identifies long-running queries based on execution duration, wait types, CPU usage, and database read-write operations, allowing administrators to quickly identify and resolve performance bottlenecks. Users can also create custom metrics by writing their own queries to collect performance data specific to their needs and receive alerts when values exceed defined thresholds.</p>        <h1 class="blog-sub-title">The Future of Database Performance Management</h1>        <p>Database observability platforms represent a critical evolution in how organizations manage their data infrastructure. As databases continue to grow in complexity and importance, the deep visibility provided by observability tools becomes not just beneficial but essential. 
The integration of machine learning, automated diagnostics, and predictive analytics into these platforms promises even greater capabilities in the future, enabling truly proactive database management where potential issues are identified and resolved before they impact users. For organizations seeking to maintain competitive advantage in an increasingly data-driven world, adopting comprehensive observability solutions is no longer optional; it's a strategic necessity!</p></body></html>]]></description>
</item>
<item>
<title>The Database Skills Gap Crisis: Navigating the Shortage of Database Professionals</title>
<link>https://www.navicat.com/company/aboutus/blog/3496-the-database-skills-gap-crisis-navigating-the-shortage-of-database-professionals.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <title>The Database Skills Gap Crisis: Navigating the Shortage of Database Professionals</title></head><body><b>Jan 9, 2026</b> by Robert Gravelle<br/><br/>    <p>A critical shortage of skilled database professionals is threatening the digital transformation initiatives of organizations across a range of industries. As data volumes explode and database technologies proliferate, the demand for experienced database administrators, architects, and engineers has far outpaced the available talent pool. This skills gap has forced companies to rethink their approach to database management, accelerating the adoption of automation tools, low-code platforms, and productivity-enhancing technologies. Understanding this crisis and the strategies organizations are employing to address it has become essential for technology leaders across the globe.</p>    <h1 class="blog-sub-title">The Root Causes of the Crisis</h1>    <p>The database skills shortage stems from multiple converging factors. The rapid diversification of database technologies means organizations now operate heterogeneous environments running MySQL, PostgreSQL, MongoDB, Oracle, and SQL Server simultaneously, often alongside specialized systems for time-series data, graph databases, and vector search. Traditional approaches that relied on platform-specific specialists have become economically unfeasible, as finding experts for each database type proves nearly impossible in today's tight labor market. 
Compounding this challenge, experienced database professionals are retiring faster than new talent enters the field, while database complexity continues to increase with cloud migrations, distributed architectures, and real-time processing requirements.</p>    <h1 class="blog-sub-title">Productivity Tools to the Rescue</h1>    <p>In response to talent scarcity, organizations have increasingly turned to comprehensive database management tools that enable smaller teams to manage larger, more diverse database portfolios. Modern platforms provide visual interfaces that reduce dependency on command-line expertise while offering unified management across multiple database types. Tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> have become essential for organizations facing the skills gap, as they allow database professionals to work efficiently across MySQL, PostgreSQL, MongoDB, SQL Server, Oracle, and other platforms through a single consistent interface. Features such as visual query builders, automated schema comparison, and data modeling capabilities lower the barrier to entry for less experienced staff while maintaining the productivity of seasoned professionals.</p>    <h1 class="blog-sub-title">Adapting Through Democratization</h1>    <p>The shortage has accelerated the democratization of database management, with organizations enabling developers, data analysts, and other technical staff to perform tasks that previously required dedicated DBA expertise. This shift has been facilitated by intuitive database management platforms that abstract complexity without sacrificing capability. By providing graphical schema design tools, automated backup scheduling, and simplified data migration features, these platforms allow non-specialists to handle routine database operations competently. 
This democratization reduces bottlenecks created by DBA shortages and frees specialized database professionals to focus on complex architectural decisions, performance optimization, and strategic initiatives that truly require deep expertise.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>The database skills gap represents a fundamental challenge for modern organizations, but it has also driven innovation in how database work is approached and distributed across technical teams. By combining strategic tool adoption with workforce development initiatives and organizational restructuring, companies are finding ways to maintain and even expand their database capabilities despite persistent talent shortages. Success in this environment requires embracing automation, investing in platforms that multiply individual productivity, and cultivating versatile database professionals who can work effectively across multiple technologies rather than specializing narrowly in single platforms.</p></body></html>]]></description>
</item>
<item>
<title>The Economics of Multi-Cloud Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3494-the-economics-of-multi-cloud-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Economics of Multi-Cloud Databases</title></head><body><b>Jan 2, 2026</b> by Robert Gravelle<br/><br/><p>Organizations today face increasingly complex decisions about where and how to deploy their database infrastructure. Multi-cloud database strategies, which involve distributing data systems across multiple cloud providers such as AWS, Azure, and Google Cloud, have emerged as a viable approach for enterprises seeking to balance cost, performance, and flexibility. Understanding the economic implications of these deployments is essential for making informed strategic decisions that align with both technical requirements and business objectives. With that in mind, today's blog article will cover important cost analysis considerations, how to avoid vendor lock-in, and more!</p>    <h1 class="blog-sub-title">Cost Analysis and Optimization</h1>        <p>The financial landscape of multi-cloud databases presents both opportunities and challenges. While a single-cloud approach might seem simpler from a cost management perspective, multi-cloud strategies can unlock significant savings through competitive pricing dynamics. Different cloud providers offer varying price points for similar database services, and organizations can leverage these differences by selecting the most cost-effective option for specific workloads. For instance, one provider might offer superior pricing for high-throughput transactional databases, while another excels in cost efficiency for analytics workloads.</p>        <p>However, the economic picture extends beyond simple service pricing. Data transfer costs between clouds, often called egress fees, can accumulate quickly and erode potential savings. Organizations must carefully model their data flow patterns and access requirements to avoid unexpected charges. 
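To make the egress-fee point concrete, here is a minimal cost model comparing a single-cloud deployment against splitting analytics onto a cheaper second provider. The per-GB rate and discount are illustrative placeholders, not any provider's actual pricing:

```python
# Rough monthly cost comparison for a cross-cloud analytics pipeline.
# Rates below are illustrative placeholders, not real provider pricing.
EGRESS_PER_GB = 0.09       # assumed cost to move data out of cloud A
ANALYTICS_DISCOUNT = 0.30  # assumed cheaper analytics compute on cloud B

def monthly_cost(gb_transferred, analytics_compute_cost):
    """Return (single-cloud cost, multi-cloud cost) per month."""
    single_cloud = analytics_compute_cost
    multi_cloud = (analytics_compute_cost * (1 - ANALYTICS_DISCOUNT)
                   + gb_transferred * EGRESS_PER_GB)
    return single_cloud, multi_cloud

# The compute discount is eroded, then reversed, as transfer volume grows:
for gb in (1_000, 10_000, 50_000):
    single, multi = monthly_cost(gb, analytics_compute_cost=10_000)
    print(f"{gb:>6} GB/month: single=${single:,.0f} multi=${multi:,.0f}")
```

With these placeholder numbers the multi-cloud option wins at low transfer volumes but costs more than single-cloud by the 50,000 GB/month mark, which is exactly the kind of crossover that data-flow modeling is meant to surface before the bill arrives.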
Additionally, the operational overhead of managing multiple cloud environments requires investment in skilled personnel and sophisticated management tools, which must be factored into the total cost of ownership calculation.</p>    <h1 class="blog-sub-title">Vendor Lock-In Avoidance</h1>        <p>Perhaps the most compelling economic argument for multi-cloud databases lies in reducing dependency on any single cloud provider. Vendor lock-in creates significant business risk by limiting negotiating leverage and restricting architectural flexibility. When all database infrastructure resides with one provider, organizations may face unfavorable pricing changes with limited alternatives. A multi-cloud approach fundamentally changes this dynamic by maintaining genuine optionality.</p>        <p>The strategic value of avoiding lock-in extends beyond pricing negotiations. Technology landscapes evolve rapidly, and the leading database services or features today may not maintain their competitive advantage indefinitely. By maintaining infrastructure across multiple clouds, organizations can more easily adopt emerging technologies and services from different providers without undertaking massive migration projects. This architectural flexibility translates directly into economic value by enabling faster responses to market opportunities and reducing the risk of technical debt.</p>    <h1 class="blog-sub-title">Strategic Considerations</h1>        <p>Successful multi-cloud database deployment requires careful strategic planning that balances technical requirements with economic realities. Data residency and sovereignty regulations increasingly dictate where certain data must be stored, making multi-cloud approaches not just economically advantageous but sometimes legally necessary. 
Organizations operating globally may find that certain regions are better served by specific cloud providers due to data center proximity, regulatory compliance, or local partnership arrangements.</p>        <p>Performance considerations also play a crucial economic role. Distributing databases across multiple clouds can improve application resilience and reduce latency by placing data geographically closer to users. However, these benefits must be weighed against the complexity of maintaining data consistency and the potential costs of cross-cloud data synchronization. Organizations must develop clear decision frameworks that evaluate which workloads benefit most from multi-cloud deployment and which are better served by single-cloud simplicity.</p>    <h1 class="blog-sub-title">Managing Multi-Cloud Databases with Navicat</h1>        <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> provides comprehensive support for managing databases across multiple cloud environments, offering simultaneous connections to MySQL, Redis, PostgreSQL, MongoDB, MariaDB, SQL Server, Oracle, Snowflake, and SQLite databases from a single application. The platform is compatible with major cloud databases including Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud, MongoDB Atlas, and others, making it particularly valuable for organizations implementing multi-cloud strategies.</p>        <p>Navicat also offers advanced features including data modeling tools that support Relational, Dimensional, and Data Vault 2.0 methods, visual query builders, and the ability to synchronize connection settings, queries, and workspaces through <a class="default-links" href="https://www.navicat.com/en/products/navicat-cloud" target="_blank">Navicat Cloud</a>. 
This centralized approach significantly reduces the operational complexity and associated costs of managing heterogeneous database environments across multiple cloud providers, allowing database administrators to work with consistent tools and interfaces regardless of the underlying cloud platform.</p>    <h1 class="blog-sub-title">Conclusion</h1>        <p>The economics of multi-cloud databases represent a highly nuanced calculation that extends well beyond simple price comparisons. While multi-cloud strategies offer genuine opportunities for cost optimization, vendor lock-in avoidance, and strategic flexibility, they also introduce complexities that require careful management. Organizations must approach multi-cloud database deployment with clear economic models that account for direct costs, operational overhead, and strategic value. When implemented thoughtfully with appropriate management tools and governance frameworks, multi-cloud database strategies can deliver substantial economic benefits while positioning organizations for long-term technical and business success. The key lies in treating multi-cloud not as a universal solution, but as a strategic option to be deployed where it creates genuine value.</p></body></html>]]></description>
</item>
<item>
<title>Reimagining Consensus: New Approaches to Consistency in Distributed Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3492-reimagining-consensus-new-approaches-to-consistency-in-distributed-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Reimagining Consensus: New Approaches to Consistency in Distributed Databases</title></head><body><b>Dec 24, 2025</b> by Robert Gravelle<br/><br/><p>For years, Raft and Paxos have been the foundational pillars of distributed consensus in database systems. These algorithms revolutionized how distributed databases could maintain consistency across multiple nodes, providing reliable ways to agree on data values even in the face of network partitions and node failures. However, as applications have become increasingly global and data volumes have exploded, the database community has recognized that traditional consensus algorithms, while robust, can create bottlenecks in performance and scalability.</p><p>The emergence of new consensus mechanisms represents a fundamental shift in how we think about distributed databases. Modern approaches are designed from the ground up to handle the unique challenges of globally distributed systems, where network latency between distant data centers can be measured in hundreds of milliseconds rather than single-digit values. These next-generation algorithms prioritize not just correctness, but also throughput, latency reduction, and efficient resource utilization across geographically dispersed infrastructure.</p><h1 class="blog-sub-title">Multi-Leader and Leaderless Approaches</h1><p>One of the most significant departures from traditional consensus algorithms is the move away from single-leader architectures. While Raft and Paxos rely on a leader node to coordinate writes, newer approaches embrace multi-leader or even leaderless architectures that can accept writes at multiple locations simultaneously. 
This architectural shift dramatically reduces write latency for globally distributed applications, as clients can write to the nearest data center without waiting for coordination with a distant leader node.</p><p>Conflict-free Replicated Data Types, or CRDTs, represent a particularly elegant solution to the consensus challenge. Rather than requiring nodes to agree on the order of operations before applying them, CRDTs are mathematical structures designed to converge to the same state regardless of the order in which operations are received. This allows databases to achieve eventual consistency without the coordination overhead of traditional consensus, enabling exceptional performance for use cases where temporary divergence is acceptable.</p><h1 class="blog-sub-title">Optimistic Concurrency and Hybrid Models</h1><p>Another frontier in distributed consensus involves optimistic concurrency control mechanisms that assume conflicts are rare and handle them as exceptions rather than as the norm. These systems allow transactions to proceed without extensive locking or coordination, validating consistency only at commit time. When combined with intelligent conflict resolution strategies, this approach can deliver remarkable performance improvements for workloads where contention is naturally low.</p><p>Hybrid consensus models are also gaining traction, intelligently selecting between different consistency levels and coordination mechanisms based on the specific requirements of each transaction. These adaptive systems might use strong consistency with traditional consensus for critical financial transactions, while employing looser consistency guarantees for less critical operations like user preference updates. 
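The order-independence that lets CRDTs converge without coordination can be illustrated with the simplest CRDT, a grow-only counter. This is a from-scratch sketch of the idea, not code from any particular database:

```python
# Grow-only counter (G-Counter) CRDT sketch.
# Each node increments only its own slot; merging takes the element-wise
# max, so replicas converge regardless of the order merges are applied in.

class GCounter:
    def __init__(self, node_id, nodes):
        self.node_id = node_id
        self.counts = {n: 0 for n in nodes}

    def increment(self, amount=1):
        # A node only ever advances its own entry.
        self.counts[self.node_id] += amount

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # which is what guarantees convergence.
        for n, c in other.counts.items():
            self.counts[n] = max(self.counts[n], c)

    def value(self):
        return sum(self.counts.values())

nodes = ["us-east", "eu-west"]
a = GCounter("us-east", nodes)
b = GCounter("eu-west", nodes)
a.increment(3)
b.increment(5)

# Merge in either order; both replicas reach the same total.
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 8 8
```

Because merge is a max, replaying the same merge twice, or in a different order, changes nothing; that is the mathematical structure doing the work that a consensus round would otherwise have to do.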
This flexibility allows database systems to optimize for both correctness and performance across diverse workloads.</p><h1 class="blog-sub-title">Working with Distributed Databases Using Navicat</h1><p>As organizations adopt these sophisticated distributed database architectures, effective management tools become essential. Navicat, a comprehensive database management and development platform, provides excellent support for working with distributed database systems. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> enables database administrators to simultaneously connect to multiple database platforms including MySQL, PostgreSQL, MongoDB, Redis, and cloud-based solutions, making it well-suited for managing distributed database deployments that often span multiple database technologies.</p><p>Navicat's compatibility with major cloud database services, including Amazon RDS, Amazon Aurora, Microsoft Azure SQL Database, Google Cloud SQL, and MongoDB Atlas, allows teams to manage distributed databases across different cloud providers from a single interface. The platform's data transfer and synchronization capabilities are particularly valuable for distributed systems, enabling administrators to migrate data and maintain consistency across geographically distributed nodes. With features like secure SSH tunneling and SSL connections, Navicat ensures that management operations remain secure even when working with databases distributed across multiple regions and cloud environments.</p><h1 class="blog-sub-title">Conclusion</h1><p>The landscape of distributed consensus algorithms continues to evolve rapidly, driven by the demands of global-scale applications and the architectural possibilities enabled by cloud infrastructure. 
While Raft and Paxos remain important foundations, the future belongs to more nuanced approaches that can adapt to varying requirements for consistency, performance, and availability. As these technologies mature, they promise to make truly global, highly responsive distributed databases accessible to a broader range of applications, fundamentally changing how we build data-intensive systems at planetary scale.</p></body></html>]]></description>
</item>
<item>
<title>Database Containers and Kubernetes Evolution</title>
<link>https://www.navicat.com/company/aboutus/blog/3490-database-containers-and-kubernetes-evolution.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Containers and Kubernetes Evolution</title></head><body><b>Dec 19, 2025</b> by Robert Gravelle<br/><br/><p>The journey of running databases in containerized environments has been a transformative one, marking a significant shift from the early days when Kubernetes was designed primarily for stateless applications. Today, containerized databases represent a mature technology stack that enables organizations to manage data workloads with the same agility and scalability they've come to expect from their application layers. This evolution has been driven by innovations in persistent storage, specialized orchestration tools, and a growing understanding of how to balance the dynamic nature of containers with the stability requirements of stateful data systems.</p> <h1 class="blog-sub-title">How StatefulSets Changed the Game</h1><p>When Kubernetes first emerged in 2014, it excelled at managing stateless containerized applications but struggled with databases and other stateful workloads. The introduction of StatefulSets in Kubernetes 1.5 marked a pivotal moment in this evolution, providing the foundational features necessary for managing stateful applications. Unlike standard Deployments, StatefulSets maintain stable network identities for pods, ensure ordered deployment and scaling, and provide persistent storage that survives pod rescheduling. This means that each database instance receives a predictable hostname and storage volume that persists even when the pod moves between nodes, addressing one of the fundamental challenges of running databases in short-lived container environments.</p><p>StatefulSets also introduced ordered graceful deployment and scaling, which is critical for database clusters that require specific initialization sequences or leader election processes. 
When scaling a database cluster up or down, StatefulSets ensure that operations happen in a controlled, sequential manner rather than all at once, preventing data inconsistencies and ensuring that replication relationships remain intact throughout the process.</p> <h1 class="blog-sub-title">Operators: Bridging the Gap Between Kubernetes and Database Management</h1><p>While StatefulSets provided the infrastructure foundation, Kubernetes Operators emerged as the intelligent layer that brings database-specific expertise into the orchestration process. Operators extend the Kubernetes API through Custom Resource Definitions, allowing administrators to define database-specific resources such as backup policies, replication configurations, and scaling strategies. These operators contain controller logic that continuously watches the state of database deployments and executes the necessary actions to maintain desired configurations through reconciliation loops.</p><p>The sophistication of modern database operators has transformed how teams approach database lifecycle management in Kubernetes environments. Rather than manually executing backup procedures or failover operations, operators automate these complex workflows with an understanding of database-specific requirements. For PostgreSQL deployments, operators can automatically handle streaming replication setup, while MongoDB operators understand sharding configurations and can orchestrate complex cluster topologies. This automation is particularly valuable because it encodes years of database administration expertise into code that runs continuously, catching issues before they become problems and ensuring that best practices are consistently applied.</p> <h1 class="blog-sub-title">The Persistent Storage Challenge</h1><p>Perhaps no aspect of containerized databases has been more complex than persistent storage. 
Kubernetes initially relied on ephemeral storage that disappeared when pods terminated, which was fundamentally incompatible with database workloads where data durability is paramount. The evolution of Persistent Volumes and Persistent Volume Claims addressed this challenge by providing an abstraction layer between storage infrastructure and the applications consuming it. Storage Classes emerged to enable dynamic provisioning, allowing databases to request storage with specific performance characteristics without administrators needing to pre-provision volumes manually.</p><p>However, persistent storage in Kubernetes environments introduces challenges that extend beyond simple volume mounting. Performance considerations become critical when database workloads demand consistent IOPS and low latency that can vary significantly across different storage backends. Network-attached storage solutions must balance accessibility across nodes with the performance overhead of remote access, while local storage offers excellent performance but complicates pod scheduling and failover scenarios. Backup and disaster recovery strategies also require careful planning, as traditional approaches may not translate directly to containerized environments where volumes are dynamically provisioned and pods may be ephemeral.</p> <h1 class="blog-sub-title">Working with Containerized Databases Using Modern Tools</h1><p>As containerized databases have matured, the choice of tools for managing and interacting with them has grown accordingly. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, a comprehensive database management tool, can connect to and work with containerized databases running in Docker and Kubernetes environments. 
When databases are deployed in containers with properly exposed ports, Navicat connects to them just as it would to traditional database instances, using the container's mapped network ports or cluster service endpoints. The platform supports a wide range of database systems commonly deployed in containers, including MySQL, PostgreSQL, MongoDB, Redis, and many others, providing a familiar graphical interface for database administration tasks regardless of whether the underlying database runs in a container or on traditional infrastructure.</p><p>Additionally, Navicat itself offers containerized deployment options, with both Navicat Monitor and Navicat On-Prem Server available as Docker images that can be deployed in containerized environments. This flexibility allows organizations to maintain consistent tooling across both traditional and cloud-native architectures, managing containerized databases with the same robust feature set that Navicat provides for conventional deployments.</p> <h1 class="blog-sub-title">Conclusion</h1><p>The maturation of containerized databases represents a remarkable achievement in cloud-native technology, transforming what was once considered impossible into a production-ready approach for managing data workloads. Through the introduction of StatefulSets, the development of sophisticated operators, and the evolution of persistent storage solutions, Kubernetes has evolved from a platform hostile to stateful workloads into one that can reliably run mission-critical database systems. While challenges remain around performance optimization, storage management, and operational complexity, the trajectory is clear: containerized databases are not just viable but increasingly preferred for organizations seeking the agility and consistency that cloud-native architectures provide. 
As tooling and best practices continue to mature, we can expect containerized databases to become the standard rather than the exception.</p></body></html>]]></description>
</item>
<item>
<title>Databases Meet WebAssembly: Bringing Data Processing to the Browser and Beyond</title>
<link>https://www.navicat.com/company/aboutus/blog/3488-databases-meet-webassembly-bringing-data-processing-to-the-browser-and-beyond.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Databases Meet WebAssembly: Bringing Data Processing to the Browser and Beyond</title></head><body><b>Dec 12, 2025</b> by Robert Gravelle<br/><br/><p>For decades, databases have been firmly planted on servers and in data centers, accessible only through network calls from client applications. WebAssembly (WASM) is fundamentally changing this equation by enabling database engines to run directly in browsers, edge computing environments, and serverless platforms with performance that rivals native applications. This technological convergence opens new possibilities for developers, from offline-first applications to distributed data processing at the network edge. In this article, we'll examine some concrete examples of WASM databases both new and traditional, and learn about the tools available for managing these distributed data workloads.</p><h1 class="blog-sub-title">How WebAssembly Enables Database Portability</h1><p>WebAssembly is a binary instruction format designed for efficient execution across different platforms. By compiling database engines to WASM, developers can achieve near-native performance while maintaining cross-platform compatibility. This means a single compiled database binary can run in a browser on Windows, macOS, Linux, or mobile devices without modification. The sandbox environment that WASM provides also enhances security, isolating database operations from the host system while still allowing rapid data processing. This combination of portability, performance, and security makes WASM an ideal target for database engines designed for modern, distributed computing scenarios.</p><h1 class="blog-sub-title">Examples of WASM-First and WASM-Enabled Databases</h1><p>Several databases have embraced WebAssembly to extend their reach. SQLite, one of the world's most widely used databases, has been compiled to WASM, enabling lightweight SQL execution in browsers and edge environments. 
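The SQLite engine that ships as a WASM build is the same engine long available through native bindings. As a rough illustration of the in-process SQL execution involved (the browser-side WASM API differs in its details), Python's standard-library binding looks like this:

```python
import sqlite3

# In-process SQL: the engine runs inside the application itself, with no
# database server or network hop; this is the same property the WASM
# builds bring to browsers and edge runtimes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (page TEXT, ms INTEGER)")
db.executemany("INSERT INTO events VALUES (?, ?)",
               [("home", 120), ("home", 80), ("cart", 240)])
rows = db.execute(
    "SELECT page, AVG(ms) FROM events GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('cart', 240.0), ('home', 100.0)]
db.close()
```

In a browser, the equivalent WASM build executes the same SQL against an in-memory or origin-private-filesystem database, which is what makes offline-first querying possible without a backend.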
DuckDB, a powerful analytical database optimized for OLAP workloads, offers WASM distributions for in-browser data analysis without server-side processing. These WASM-native options are purpose-built for edge and browser environments.</p><p>Beyond these specialized projects, traditional database engines have also developed WASM support. PostgreSQL can run in browser environments through WASM compilation, allowing developers to build sophisticated applications with full PostgreSQL compatibility. MySQL similarly has WASM implementations available, bringing familiar relational database capabilities to web applications. MongoDB, the popular NoSQL database, has explored WASM deployments for embedded scenarios. Redis, the in-memory data store, also supports WASM configurations, enabling fast caching and session management directly in edge environments.</p><h1 class="blog-sub-title">Real-World Applications</h1><p>The implications of WASM databases extend across multiple use cases. Web applications can now function offline with full data persistence, synchronizing when connectivity returns. Data analysts can perform complex queries on large datasets directly in the browser without uploading sensitive information to external servers. Edge computing platforms can process and filter data closer to users, reducing latency and bandwidth costs. Serverless functions gain the ability to perform sophisticated database operations within their resource constraints - operations that previously required external database connections.</p><h1 class="blog-sub-title">Managing WASM Databases with Navicat</h1><p>As database deployments increasingly span traditional servers and WASM environments, developers need tools that work across this diverse landscape. 
<a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, a widely-used database management platform, can work with several databases that have WASM implementations, including PostgreSQL, MySQL, MongoDB, and Redis. This capability allows developers to manage their databases through a familiar interface whether they're running in traditional data centers or in WASM environments, streamlining database administration and development workflows across modern application architectures.</p><h1 class="blog-sub-title">Looking Forward</h1><p>The convergence of databases and WebAssembly represents a significant shift in how data is processed and managed. As more database engines gain WASM support and developer tools mature, we can expect increasingly sophisticated applications that leverage distributed data processing, offline-first architecture, and edge computing. The future of databases is becoming less about location and more about capability, with WASM ensuring that powerful data processing is available wherever it's needed.</p></body></html>]]></description>
</item>
<item>
<title>Database Security in the Age of AI</title>
<link>https://www.navicat.com/company/aboutus/blog/3485-database-security-in-the-age-of-ai.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Security in the Age of AI</title></head><body><b>Dec 5, 2025</b> by Robert Gravelle<br/><br/><p>The intersection of artificial intelligence and cybersecurity has had a tremendous impact on how organizations protect their most valuable asset: data. As AI technologies become increasingly sophisticated, they present both unprecedented opportunities for enhanced database security and novel threats that traditional protection mechanisms struggle to address. Database administrators now face the dual challenge of defending against AI-powered attacks while leveraging AI itself to strengthen their security posture.</p><h1 class="blog-sub-title">Advanced Threat Detection Through AI</h1><p>Modern databases are incorporating machine learning algorithms that continuously analyze access patterns, query behaviors, and data flows to identify anomalies that might indicate a security breach. These AI-driven systems can detect subtle deviations from normal operations that would be nearly impossible for human administrators to spot. By establishing baseline behaviors for users, applications, and network traffic, machine learning models can flag unusual activities in real-time, such as unauthorized access attempts, abnormal data exfiltration patterns, or suspicious query structures that might indicate SQL injection attempts.</p><p>The advantage of AI-powered threat detection lies in its ability to learn and adapt. Unlike static rule-based systems, these intelligent solutions continuously refine their understanding of what constitutes normal versus suspicious behavior. 
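As a toy illustration of baselining - the numbers, the single signal, and the three-sigma threshold are simplifying assumptions, not any vendor's algorithm - consider flagging a user's hourly query volume:

```python
import statistics

# Hypothetical per-hour query counts for one user, gathered over a training window.
baseline = [101, 98, 110, 95, 104, 99, 107, 102, 97, 105]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: float, sigmas: float = 3.0) -> bool:
    """Flag counts more than `sigmas` standard deviations from the learned mean."""
    return abs(count - mean) > sigmas * stdev

print(is_anomalous(103))  # an ordinary hour -> False
print(is_anomalous(950))  # a possible bulk exfiltration -> True
```

Real deployments learn per-user and per-application baselines across many signals and continuously refine them as workloads shift, rather than relying on a single fixed threshold.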
They can identify zero-day threats and novel attack vectors by recognizing patterns that deviate from established norms, even when those patterns don't match any known attack signatures.</p><h1 class="blog-sub-title">AI-Powered Attack Prevention</h1><p>Beyond detection, artificial intelligence enables proactive defense mechanisms that can prevent attacks before they compromise data integrity. Predictive analytics models assess risk factors across the database environment, identifying vulnerabilities and prioritizing remediation efforts based on potential impact. AI systems can automatically implement security policies, adjust access controls dynamically based on risk assessments, and even simulate attack scenarios to test defense mechanisms.</p><p>These prevention systems also combat the growing threat of AI-generated attacks, where malicious actors use machine learning to craft more sophisticated phishing campaigns, develop polymorphic malware, or automate the discovery of system vulnerabilities. By employing AI to understand and predict adversarial AI tactics, organizations can stay one step ahead of attackers who are themselves leveraging automation and intelligence.</p><h1 class="blog-sub-title">Securing Database Connections with Navicat</h1><p>Database management tools play a critical role in maintaining security throughout the development and administration lifecycle. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provides several robust features designed to protect database connections and prevent unauthorized access. The platform supports SSH Tunneling and SSL/TLS encryption, which protect the confidentiality and integrity of data as it travels between the client and database server. 
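As an illustrative aside, Python's standard ssl module shows the client-side posture that encrypted database connections rely on under the hood (a generic sketch of what drivers configure, not Navicat's implementation):

```python
import ssl

# A client-side TLS context of the kind a database driver builds before connecting.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Secure defaults: the server's certificate must validate against trusted CAs,
# and its hostname must match the certificate.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

With settings like these, the client refuses unverified servers before any query traffic flows.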
This encryption ensures that even if network traffic is intercepted, the data remains unreadable to unauthorized parties.</p><p>Navicat also implements advanced authentication methods that provide multiple layers of protection against unauthorized access. These include PAM (Pluggable Authentication Modules), LDAP (Lightweight Directory Access Protocol), Kerberos authentication, Multi-Factor Authentication (MFA), and Single Sign-On (SSO) capabilities. This diversity of authentication options allows organizations to implement security policies that align with their specific compliance requirements and risk profiles, ensuring that only verified users can access sensitive database resources.</p><h1 class="blog-sub-title">Conclusion</h1><p>As artificial intelligence continues to evolve, so too will the cybersecurity landscape. Organizations must embrace AI-powered security solutions while remaining vigilant against AI-generated threats. The databases of tomorrow will need to be intelligent, adaptive, and resilient, capable of defending themselves against increasingly sophisticated attacks while enabling legitimate users to work efficiently and securely. Success in this new era requires not just technological investment, but a comprehensive strategy that combines advanced tools, robust policies, and continuous education about emerging threats and defense mechanisms.</p></body></html>]]></description>
</item>
<item>
<title>Databases in the Metaverse: Meeting New Virtual World Demands</title>
<link>https://www.navicat.com/company/aboutus/blog/3484-databases-in-the-metaverse-meeting-new-virtual-world-demands.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Databases in the Metaverse: Meeting New Virtual World Demands</title></head><body><b>Nov 28, 2025</b> by Robert Gravelle<br/><br/><p>The metaverse is a place where virtual reality, augmented reality, and persistent digital worlds converge and millions of users interact simultaneously. As these immersive environments evolve from concept to reality, they are exposing fundamental limitations in traditional database architectures and driving innovation in how we store, query, and synchronize data at unprecedented scales.</p>    <h1 class="blog-sub-title">The Spatial Database Revolution</h1><p>Traditional databases were designed for tabular data organized in rows and columns, but the metaverse operates in three-dimensional space. Every virtual object, avatar, and environmental element exists at specific coordinates in a 3D world, creating an immediate need for spatial databases that can efficiently handle geometric data. These specialized systems must answer queries like "find all users within 50 meters of this location" or "identify objects intersecting this boundary" in milliseconds, not seconds. The challenge extends beyond simple coordinate storage to encompass complex spatial relationships, collision detection, and proximity-based interactions that form the foundation of believable virtual experiences.</p><h1 class="blog-sub-title">Real-Time Synchronization at Scale</h1><p>Perhaps no challenge is more critical to metaverse databases than real-time synchronization. When thousands of users occupy the same virtual space, every movement, interaction, and state change must propagate to all relevant clients with minimal latency. Traditional database replication strategies, which might synchronize data every few seconds or minutes, simply cannot support the fluid experiences users expect. 
Instead, metaverse platforms require event-driven architectures with pub-sub messaging patterns, conflict resolution algorithms, and sophisticated caching layers that maintain consistency without sacrificing performance. The technical complexity multiplies when considering global deployments where users may be separated by continents yet share the same virtual room.</p><h1 class="blog-sub-title">Supporting Massive Numbers of Concurrent Users</h1><p>Conventional databases struggle when hundreds of users access the same data simultaneously. The metaverse amplifies this challenge exponentially, potentially requiring support for tens of thousands of concurrent users in a single instance. This demand has accelerated adoption of distributed database architectures that partition data across multiple nodes, employ sharding strategies based on spatial regions, and implement read replicas to distribute query loads. However, distribution introduces its own complications around data locality, cross-shard queries, and maintaining transactional integrity across a distributed system. Database architects must balance horizontal scalability against the need for strong consistency guarantees in financially critical transactions like virtual asset purchases.</p><h1 class="blog-sub-title">Leveraging Navicat for Spatial Database Management</h1><p>As organizations build out their metaverse infrastructure, tools like Navicat provide essential support for managing the complex database requirements these platforms demand. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> offers unified access to PostgreSQL databases, which serve as a foundation for many spatial implementations through the PostGIS extension. Navicat supports connection to cloud-hosted databases including Amazon RDS, Azure Database for PostgreSQL, and Google Cloud SQL, enabling developers to manage distributed metaverse databases from a single interface. 
Its visual query builder and data modeling capabilities help teams design efficient schemas for spatial data, while its support for Redis provides critical tools for managing the in-memory caching layers that underpin real-time synchronization. With <a class="default-links" href="https://www.navicat.com/en/products/navicat-data-modeler" target="_blank">Navicat Data Modeler</a>, database architects can visualize and optimize their spatial database structures before deployment, ensuring efficient indexing strategies for geospatial queries.</p><h1 class="blog-sub-title">Conclusion</h1><p>The metaverse is fundamentally reshaping database requirements, pushing the industry toward solutions that prioritize spatial awareness, real-time performance, and massive concurrency. As these virtual worlds mature from experimental platforms to mainstream destinations, the database technologies supporting them must also continue to evolve, incorporating lessons from gaming, distributed systems, and geospatial computing to create the high-performance foundations these immersive experiences demand.</p></body></html>]]></description>
</item>
<item>
<title>The Geospatial Database Renaissance: Transforming Location-Based Applications</title>
<link>https://www.navicat.com/company/aboutus/blog/3481-the-geospatial-database-renaissance-transforming-location-based-applications.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Geospatial Database Renaissance: Transforming Location-Based Applications</title></head><body><b>Nov 21, 2025</b> by Robert Gravelle<br/><br/><p>The explosive growth of location-aware applications has ushered in a new era of geospatial database capabilities. What once required specialized Geographic Information Systems (GIS) and complex data processing pipelines can now be accomplished directly within mainstream database platforms like MySQL, SQL Server, and PostgreSQL. This renaissance represents a seismic shift in how organizations store, query, and analyze location-based data, opening doors to more sophisticated mapping, logistics optimization, and Internet of Things (IoT) applications.</p><h1 class="blog-sub-title">The Evolution of Mainstream Database Geospatial Support</h1><p>Traditional relational databases were designed primarily for structured, non-spatial data. However, the widespread adoption of location-based services and mobile applications has driven database vendors to integrate native spatial capabilities. Major platforms like PostgreSQL with PostGIS, MySQL's spatial extensions, Microsoft SQL Server's spatial data types, and Oracle Spatial have transformed ordinary databases into powerful geospatial engines.</p><p>This integration brings several advantages that weren't possible with separate GIS systems. Organizations can now perform complex spatial queries alongside traditional business data operations within a single database transaction. For example, a retail company can simultaneously analyze customer demographics, inventory levels, and store proximity in one unified query, eliminating the need for data synchronization between unrelated systems.</p><p>The performance improvements are equally significant. Modern spatial indexing techniques, such as R-trees and grid-based indexes, enable rapid querying of millions of geographic features. 
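A proximity query - the staple behind "find everything within N meters of here" - can be sketched in plain Python with the haversine formula; the coordinates and 100-meter radius below are illustrative, and production systems delegate this to a spatially indexed engine such as PostGIS rather than scanning in application code:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Illustrative points of interest near a query location.
pois = {
    "cafe":    (40.7130, -74.0061),
    "station": (40.7200, -74.0100),
    "museum":  (40.7127, -74.0059),
}
query = (40.7128, -74.0060)

# Linear scan here; a spatial index (R-tree) prunes candidates instead.
within_100m = sorted(
    name for name, (lat, lon) in pois.items()
    if haversine_m(query[0], query[1], lat, lon) <= 100
)
print(within_100m)  # ['cafe', 'museum']
```

The filter itself is simple; the engineering challenge the article describes is answering it over millions of features in milliseconds, which is exactly what R-tree and grid indexes provide.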
These advances make real-time location services feasible at unprecedented scales, supporting everything from ride-sharing applications to supply chain optimization systems.</p><h1 class="blog-sub-title">Specialized Geospatial Database Solutions</h1><p>While mainstream databases have gained spatial capabilities, specialized geospatial databases continue to push the boundaries of what's possible with location data. These purpose-built systems excel in scenarios requiring extreme performance, advanced spatial analytics, or handling of complex geographic data types that general-purpose databases struggle with.</p><p>Graph databases with spatial extensions, such as Neo4j's spatial procedures, excel at routing and network analysis problems. They can efficiently model transportation networks, utility infrastructures, and social relationships with geographic components. Similarly, time-series databases with spatial capabilities handle streaming location data from IoT devices, enabling real-time tracking and analysis of moving objects.</p><p>Distributed spatial databases address the challenges of managing massive geospatial datasets across multiple nodes. These systems can partition data geographically, ensuring that queries affecting specific regions are processed efficiently without unnecessary network overhead. This capability proves crucial for global applications serving users across different continents.</p><h1 class="blog-sub-title">Applications Driving the Renaissance</h1><p>The mapping and navigation industry represents the most visible application of modern geospatial databases. Companies like Google, Apple, and HERE process billions of location queries daily, requiring databases that can handle complex routing calculations, real-time traffic analysis, and point-of-interest searches with sub-second response times. 
These applications demand not just storage efficiency but also sophisticated query optimization for multi-dimensional spatial data.</p><p>Logistics and supply chain management have become increasingly sophisticated through geospatial database integration. Modern warehouse management systems use spatial databases to optimize picking routes, while delivery companies leverage geographic algorithms for dynamic route planning that adapts to real-time traffic conditions and delivery priorities. The integration of spatial and temporal data enables four-dimensional optimization that considers location, time, vehicle capacity, and delivery windows simultaneously.</p><p>IoT applications represent perhaps the fastest-growing segment driving geospatial database innovation. Smart city initiatives collect massive streams of location-tagged sensor data, from traffic monitors to environmental sensors. These applications require databases capable of ingesting high-velocity spatial data while simultaneously serving complex analytical queries for urban planning and real-time decision making.</p><h1 class="blog-sub-title">Navicat's Geospatial Data Management Features</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, which supports connections to multiple database systems - including PostgreSQL, MySQL, SQL Server, and Oracle - provides comprehensive tools for managing geospatial data across different platforms. The software's unified interface allows developers and analysts to work with spatial data regardless of the underlying database technology, eliminating the learning curve associated with platform-specific tools.</p><p>The visual query builder simplifies the creation of complex database queries without requiring deep knowledge of SQL syntax. 
Users can construct queries through an intuitive graphical interface, which proves particularly valuable for teams where not all members have extensive database programming experience. When working with databases that contain geospatial data, this visual approach can help users navigate the additional complexity that spatial queries often involve.</p><p>Navicat's data modeling capabilities allow users to design database schemas through visual ER diagrams that represent table relationships and database structure. The tool provides reverse engineering functionality to load existing database structures and create visual models, along with the ability to generate documentation for database designs. These features prove valuable when working with any complex database schema, including those that incorporate geospatial data alongside traditional business data.</p><h1 class="blog-sub-title">Conclusion</h1><p>The geospatial database renaissance represents more than just technological advancement; it signifies a fundamental shift toward location-aware computing as a standard capability rather than a specialized niche. As IoT devices proliferate and mobile applications become increasingly sophisticated, the ability to efficiently store, query, and analyze spatial data within mainstream database systems will become even more critical.</p><p>Organizations that embrace these enhanced capabilities today position themselves to leverage location intelligence as a competitive advantage. Whether optimizing delivery routes, analyzing customer behavior patterns, or managing smart city infrastructure, the convergence of spatial and traditional data analytics opens unprecedented opportunities for data-driven decision making. The tools and platforms supporting this renaissance continue to evolve, promising even more powerful and accessible geospatial capabilities in the years ahead.</p></body></html>]]></description>
</item>
<item>
<title>Monetizing Data Assets: A Guide to Database Marketplaces and Sharing</title>
<link>https://www.navicat.com/company/aboutus/blog/3468-monetizing-data-assets-a-guide-to-database-marketplaces-and-sharing.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Monetizing Data Assets: A Guide to Database Marketplaces and Sharing</title></head><body><b>Nov 14, 2025</b> by Robert Gravelle<br/><br/><p>As the world's economies become increasingly data-driven, organizations have begun to recognize that their competitive advantage lies not just in collecting data, but in their ability to access, share, and monetize diverse datasets securely. Database marketplaces have emerged to facilitate this exchange, enabling organizations to unlock new revenue streams while maintaining stringent security standards.</p><h1 class="blog-sub-title">The Rise of Database Marketplaces</h1><p>Database marketplaces are sophisticated platforms that serve as intermediaries between data providers and data consumers, creating environments where organizations can securely buy, sell, and share datasets. These platforms function much like traditional marketplaces, but instead of physical goods, they facilitate the exchange of valuable data assets. The global data marketplace platform market, which supports the buying and selling of various data types, was valued at USD 1.49 billion in 2024, with projections indicating significant growth to reach USD 5.73 billion by 2030.</p><p>Unlike simple file-sharing systems, database marketplaces provide comprehensive infrastructure that includes data cataloging, quality assessment, access controls, and monetization mechanisms. They enable organizations to discover relevant datasets, evaluate data quality before purchase, and integrate external data seamlessly into their existing analytics workflows. Gartner predicted that by 2024, 90% of large organizations would use external data to enhance their analytics, marking a significant shift in how businesses approach decision-making.</p><h1 class="blog-sub-title">Key Benefits and Value Proposition</h1><p>The adoption of database marketplaces offers compelling advantages that extend far beyond simple data access. 
For data providers, these platforms create new revenue opportunities by monetizing previously underutilized data assets. Organizations can transform their internal datasets into valuable products, generating direct income while maintaining control over how their data is used and distributed.</p><p>Data consumers benefit from access to high-quality, curated datasets that would be impossible or prohibitively expensive to collect independently. On some platforms, vendors sell complete or custom datasets and databases for a fixed price (delivered, for example, via an S3 bucket), with buyers receiving unrestricted access to analysis-ready data. This broader access to data enables smaller organizations to compete with larger enterprises by leveraging external datasets to enhance their analytics capabilities.</p><p>The platforms also provide significant operational efficiencies by standardizing data sharing processes, reducing the time and complexity typically associated with data partnerships. Organizations no longer need to negotiate individual data-sharing agreements or build custom integration solutions for each data source.</p><h1 class="blog-sub-title">Leading Platforms and Market Landscape</h1><p>The database marketplace landscape comprises various specialized platforms, each serving different market segments and use cases. Snowflake Data Marketplace stands as one of the most prominent examples, leveraging Snowflake's cloud data platform to enable seamless data sharing without requiring data movement. This approach ensures that data remains secure within the provider's environment while still being accessible to authorized consumers.</p><p>Other significant players include AWS Data Exchange, which integrates with Amazon's broader cloud offerings, and specialized platforms like Datarade, which focuses on commercial data transactions and reports that thousands of companies, including Google, BCG, and PepsiCo, use its marketplace to source data. 
The market also includes emerging platforms like Opendatabay, which offers a vast collection of curated, synthetic, premium, and open datasets to fuel data analysis, AI, and LLM applications.</p><p>The North American market led the overall data marketplace platform industry in 2024, with a share of more than 35%, reflecting the region's leadership in data monetization and sharing initiatives.</p><h1 class="blog-sub-title">Security and Privacy Considerations</h1><p>Security remains paramount in database marketplace operations, as organizations must balance data sharing benefits with the need to protect sensitive information. Modern platforms implement multiple layers of security, including advanced encryption, access controls, and audit trails that track all data interactions. These systems ensure that data providers maintain visibility into how their datasets are being used while protecting against unauthorized access.</p><p>Privacy considerations are equally critical, particularly with increasing regulatory requirements like GDPR and CCPA. Data marketplaces may give individuals control over how their data is used - clear privacy settings, preferences for data capture and storage, and mechanisms to opt out of having their data sold or shared. Successful platforms implement privacy-by-design principles, ensuring that personal data protection is built into their architecture from the ground up.</p><p>Many platforms also support synthetic data generation and anonymization techniques, allowing organizations to share valuable insights without exposing sensitive personal information. This approach enables broader data sharing while maintaining compliance with privacy regulations.</p><h1 class="blog-sub-title">How Navicat Facilitates Data Sharing</h1><p>Navicat's comprehensive database management capabilities play a crucial role in enabling effective data sharing within database marketplaces. 
In March 2025, Navicat added support for Snowflake, providing users with robust tools for managing cloud-based data solutions and making the platform particularly relevant for organizations participating in modern data marketplaces.</p><p>The platform's multi-database connectivity enables seamless data movement and synchronization across different database systems, which is essential when participating in data marketplaces that may use various underlying technologies. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> enables seamless connections to multiple databases, including MySQL, PostgreSQL, MongoDB, MariaDB, SQL Server, Oracle, SQLite, Redis, and Snowflake, all from a single application.</p><p>Navicat's security features provide the foundation for safe data sharing practices. Advanced authentication methods, including PAM, LDAP, Kerberos, MFA, and SSO, provide multiple layers of protection against unauthorized access, while SSH Tunneling and SSL/TLS protect the confidentiality and integrity of data in transit. These security capabilities are essential when organizations need to ensure that their data sharing activities meet enterprise security standards.</p><p>Additionally, when logged into <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>, shared objects behave exactly like local ones and can be viewed, edited, and deleted directly in Navicat, facilitating collaborative data management that supports marketplace participation.</p><h1 class="blog-sub-title">Looking Forward</h1><p>Database marketplaces represent a fundamental shift in how organizations approach data as a strategic asset. As the market continues to mature, we can expect to see increased specialization, with platforms focusing on specific industries or data types. 
The integration of artificial intelligence and machine learning capabilities will also enhance data discovery and quality assessment processes, making it easier for organizations to find and evaluate relevant datasets.</p><p>The future success of database marketplaces will depend on their ability to balance accessibility with security, ensuring that data sharing becomes more efficient while maintaining the trust and compliance standards that organizations require. As these platforms evolve, they will likely become essential infrastructure for data-driven organizations, enabling new forms of collaboration and innovation that were previously impossible.</p></body></html>]]></description>
</item>
<item>
<title>Database-as-Code: Extending Infrastructure-as-Code to Database Management</title>
<link>https://www.navicat.com/company/aboutus/blog/3463-database-as-code-extending-infrastructure-as-code-to-database-management.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database-as-Code: Extending Infrastructure-as-Code to Database Management</title></head><body><b>Nov 7, 2025</b> by Robert Gravelle<br/><br/><p>Since its inception about a quarter century ago, Infrastructure-as-Code (IaC) has revolutionized how we manage and deploy infrastructure resources. This approach treats infrastructure configuration as code by introducing version control, automated deployment, and consistent environments. Database-as-Code (DaC) extends these same principles to database schema management, bringing the benefits of version control and deployment automation to one of the most critical components of any application stack.</p><h1 class="blog-sub-title">Some Database-as-Code Fundamentals</h1><p>Database-as-Code represents a fundamentally new approach to database management that moves away from traditional manual practices. Instead of manually executing SQL scripts or using graphical tools to modify database schemas, DaC treats database structure and changes as code artifacts that can be versioned, reviewed, and deployed through automated pipelines.</p><p>We can compare it to building a house: with traditional database management, different contractors show up and make changes without blueprints or documentation. Database-as-Code, by contrast, is like having detailed architectural plans that everyone follows, with every change documented and approved before implementation. This approach ensures that your database schema evolves predictably and consistently across all environments.</p><p>The core principle involves storing all database schema definitions, migration scripts, and configuration files in version control systems alongside your application code. 
This creates a single source of truth for your database structure and enables you to track exactly how your database has evolved over time.</p><h1 class="blog-sub-title">Key Components and Implementation Approaches</h1><p>Database-as-Code encompasses several essential components that work together to create a comprehensive database management strategy:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li><strong>Schema definitions</strong> form the foundation, typically written in SQL DDL (Data Definition Language) statements or domain-specific languages that describe table structures, indexes, constraints, and relationships.</li>        <li><strong>Migration scripts</strong> handle the transformation of your database from one version to another. These scripts are carefully crafted to be both forward and backward compatible when possible, ensuring smooth deployments and rollback capabilities. Each migration is numbered sequentially and contains both upgrade and downgrade instructions.</li>        <li><strong>Deployment automation</strong> ties everything together through continuous integration and continuous deployment (CI/CD) pipelines. These automated workflows validate schema changes, run tests against sample data, and deploy approved changes to target environments. The automation ensures that human error is minimized and that all environments remain synchronized.</li>        <li><strong>Version control integration</strong> allows teams to collaborate on database changes just like application code. Pull requests enable peer review of schema modifications, and branching strategies can be employed to manage feature development and hotfixes. This collaborative approach helps catch potential issues before they reach production environments.</li></ul><h1 class="blog-sub-title">Tools and Implementation Support</h1><p>Modern database management tools have evolved to support Database-as-Code workflows effectively. 
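The sequentially numbered migrations described above can be sketched in a few lines of Python against SQLite (the table names and DDL are illustrative, and real migration frameworks add locking, checksums, and downgrade paths):

```python
import sqlite3

# Illustrative, sequentially numbered migrations (upgrade direction only).
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
    (3, "CREATE INDEX idx_users_email ON users(email)"),
]

def migrate(conn):
    """Apply any migrations newer than the recorded schema version, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()
    return conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # 3
print(migrate(conn))  # 3 -- idempotent: already up to date
```

Because applied versions are recorded, re-running the migrator is a no-op, and the MIGRATIONS list itself is the artifact that lives in version control alongside application code.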
Tools like Liquibase and Flyway provide frameworks for managing database migrations and schema versioning. These platforms offer database-agnostic approaches that work across multiple database systems while maintaining consistent workflows.</p><p>Cloud platforms and containerization technologies have also embraced Database-as-Code principles, offering managed services that integrate seamlessly with version control systems and deployment pipelines. These tools reduce the operational overhead of implementing Database-as-Code while providing enterprise-grade reliability and scalability.</p><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> enhances Database-as-Code practices by providing tools that facilitate the management and generation of database schema and data using code-based approaches. Navicat supports DaC principles in several key ways:</p><h3>SQL Generation from Visual Tools</h3><p>Navicat's visual query builder, data modeling tools, and stored procedure builder allow users to design and manage database objects graphically. These visual operations are then translated into the corresponding SQL scripts, which can be version-controlled as part of a DaC workflow.</p><h3>Code Snippets and Automation</h3><p>The Code Snippets feature allows users to save and reuse common SQL statements and code blocks, promoting consistency and reducing manual coding. Additionally, features like batch jobs and automated data synchronization can be configured and scheduled, enabling automated database tasks that align with DaC principles.</p><h3>Data Migration and Synchronization</h3><p>Navicat offers streamlined wizards for data migration and synchronization, which can be utilized to manage data changes in a controlled and repeatable manner, a key aspect of DaC. 
The generated SQL scripts from these operations can also be incorporated into a version control system.</p><h3>SQL Editor Features</h3><p>The SQL editor in Navicat provides features like code completion, syntax highlighting, and SQL beautifier, enhancing the efficiency and quality of manually written SQL code. This supports the creation of clean and maintainable SQL scripts for DaC.</p><h3>Data Modeling and Schema Export</h3><p>Navicat Data Modeler allows for the visual design of database schemas and the export of these designs as SQL scripts. This provides a code-based representation of the database structure that can be versioned and deployed.</p><h1 class="blog-sub-title">Conclusion</h1><p>Database-as-Code represents a natural evolution of Infrastructure-as-Code principles, transforming database management from a manual, error-prone process into a reliable, automated system that supports modern development workflows. While the initial transition may seem challenging for organizations with established processes, the long-term benefits of improved consistency, traceability, and collaboration far outweigh the learning curve. By adopting Database-as-Code practices, organizations bring the same version control, peer review, and deployment automation to their database schemas that they already rely on for application code.</p></body></html>]]></description>
</item>
<item>
<title>Introduction to Cross-Database Query Engines</title>
<link>https://www.navicat.com/company/aboutus/blog/3458-introduction-to-cross-database-query-engines.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Introduction to Cross-Database Query Engines</title></head><body><b>Oct 31, 2025</b> by Robert Gravelle<br/><br/><p>Modern organizations often find themselves managing information across multiple database systems, each serving different purposes and storing various types of data. Traditional approaches require separate connections and queries for each database, creating complexity and inefficiency. Cross-database query engines have emerged as powerful solutions to these issues, enabling seamless data integration and analysis across diverse storage systems through a single SQL interface.</p><h1 class="blog-sub-title">How Cross-Database Query Engines Work</h1><p>Cross-database query engines are specialized software platforms that provide a unified SQL interface for querying data across multiple, heterogeneous data sources simultaneously. Think of these engines as universal translators that can speak to different database languages while presenting a consistent interface to users. They abstract away the complexity of individual database systems, allowing data analysts and engineers to write standard SQL queries that can retrieve and combine data from various sources including relational databases, NoSQL systems, cloud storage, and even streaming data platforms.</p><p>The fundamental architecture of these engines typically involves a coordinator node that receives SQL queries, parses them, and creates an execution plan. This plan is then distributed across worker nodes that connect to the actual data sources, retrieve the necessary data, and perform the required computations. The results are then aggregated and returned to the user, all while maintaining the illusion of querying a single, unified database.</p><h1 class="blog-sub-title">Leading Cross-Database Query Engines</h1><p>Trino, formerly known as Presto, stands as one of the most prominent cross-database query engines in the market today. 
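The coordinator-style flow described above (parse a query, fan it out to the underlying sources, aggregate the partial results) can be sketched at miniature scale. Everything here, from the two in-memory sqlite databases standing in for separate systems to the table and column names, is illustrative only:

```python
import sqlite3

# Two separate "data sources": an orders system and a CRM.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
orders_db.executemany("INSERT INTO orders VALUES (?, ?)",
                      [(1, 120.0), (2, 80.0), (1, 45.0)])

crm_db = sqlite3.connect(":memory:")
crm_db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm_db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Ada"), (2, "Grace")])

def federated_totals():
    """Toy coordinator: fetch partial results from each source, then join."""
    totals = dict(orders_db.execute(
        "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"))
    names = dict(crm_db.execute("SELECT id, name FROM customers"))
    # The "coordinator" merges the per-source results into one answer.
    return {names[cid]: amount for cid, amount in totals.items()}

result = federated_totals()
```

A real engine does this across distributed worker nodes and heterogeneous connectors, but the shape of the work is the same: per-source retrieval followed by central aggregation.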
Originally developed by Facebook to handle their massive data analytics needs, Trino excels at interactive analytics and can query data sources ranging from traditional MySQL and PostgreSQL databases to modern systems like Apache Kafka, Amazon S3, and Elasticsearch. Its distributed architecture allows it to process queries across petabytes of data with impressive performance characteristics.</p><p>Apache Drill represents another significant player in this space, designed with a schema-free approach that allows users to query data without requiring predefined schemas. This flexibility makes Drill particularly valuable when working with semi-structured data formats like JSON, Parquet, and Avro files. Drill's self-service data exploration capabilities enable users to start analyzing data immediately without waiting for database administrators to define table structures.</p><p>Other notable engines include Apache Spark SQL, which combines cross-database querying with powerful data processing capabilities, and Dremio, which focuses on self-service data analytics with an emphasis on data virtualization and acceleration.</p><h1 class="blog-sub-title">Key Benefits and Use Cases</h1><p>Cross-database query engines deliver several compelling advantages that address common data management challenges. First, they dramatically simplify data integration by eliminating the need to move data between systems before analysis. This approach, known as data virtualization, reduces storage costs and ensures that users always work with the most current data available.</p><p>Performance benefits emerge from the engines' ability to push computations down to the data sources themselves, minimizing data movement across networks. 
Advanced query optimization techniques, including predicate pushdown and intelligent join ordering, ensure that queries execute efficiently even when spanning multiple systems.</p><p>From a business perspective, these engines accelerate time-to-insight by removing technical barriers that previously required extensive ETL (Extract, Transform, Load) processes. Data analysts can focus on deriving insights rather than wrestling with data integration challenges. Common use cases include real-time dashboards that combine transactional and analytical data, compliance reporting that aggregates data from multiple business systems, and exploratory data analysis that requires access to diverse data sources.</p><h1 class="blog-sub-title">Navicat Premium for Cross-Database Management</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> serves as an excellent complementary tool for organizations implementing cross-database query strategies. While cross-database query engines handle the heavy lifting of distributed query execution, Navicat Premium provides a user-friendly graphical tool for managing multiple database connections and performing cross-database operations. The platform supports a wide variety of different database types, allowing users to establish connections to various systems from a single interface.</p><p>Navicat Premium's cross-database query capabilities enable users to write and execute queries that span multiple databases without requiring the complex setup of dedicated query engines. For smaller-scale operations or development environments, this functionality provides immediate value. 
Additionally, Navicat's data synchronization and migration tools complement query engines by facilitating the movement and harmonization of data structures across different systems when needed.</p><h1 class="blog-sub-title">Conclusion</h1><p>Cross-database query engines represent a transformative approach to modern data analytics, breaking down traditional barriers between disparate systems and enabling organizations to derive insights from their complete data landscape. As data continues to grow in volume and variety, these engines will become increasingly essential for maintaining competitive advantage through data-driven decision making. The combination of powerful distributed query engines with intuitive management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> empowers users to unlock the full potential of their organizational data assets.</p></body></html>]]></description>
</item>
<item>
<title>The Rise of Self-Tuning Database Systems</title>
<link>https://www.navicat.com/company/aboutus/blog/3453-the-rise-of-self-tuning-database-systems.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Rise of Self-Tuning Database Systems</title></head><body><b>Oct 17, 2025</b> by Robert Gravelle<br/><br/><p>Database performance has always been the backbone of successful applications, but traditionally, keeping databases running at peak efficiency has required the expertise of seasoned database administrators working around the clock. Now, artificial intelligence is able to automate database tuning, optimizing your database configurations, index strategies, and query execution plans without human intervention. This article explores how these intelligent systems work, examines their practical benefits for modern organizations, and discusses why combining automated optimization with human expertise creates the most effective approach to database performance management.</p><h1 class="blog-sub-title">The Challenges of Traditional Database Tuning</h1><p>Before we dive into the automated solutions, let's establish why database tuning has historically been such a complex undertaking. A database is not unlike a busy restaurant kitchen during the dinner rush. The kitchen staff needs to coordinate perfectly (knowing which ingredients to prep, how to arrange workstations, and which orders to prioritize) to serve customers efficiently. Similarly, databases must juggle multiple concurrent queries, manage memory allocation, and decide how to access data most efficiently.</p><p>Traditional database tuning requires administrators to manually analyze performance metrics, identify bottlenecks, and adjust dozens of configuration parameters. This process demands deep expertise and constant vigilance, as database workloads can shift dramatically throughout the day. 
A configuration that works perfectly during morning batch processing might cause significant slowdowns when interactive users flood the system in the afternoon.</p><h1 class="blog-sub-title">How AI-Powered Database Tuning Works</h1><p>Automated database tuning systems function like having an incredibly observant and quick-learning assistant who never sleeps. These AI-powered solutions continuously monitor your database's performance characteristics, analyzing patterns in query execution, resource utilization, and response times. The system builds a comprehensive understanding of your database's behavior under different conditions, much like how a seasoned driver learns to navigate traffic patterns on their daily commute.</p><p>The artificial intelligence component employs machine learning algorithms to identify optimization opportunities that might escape human notice. For instance, the system might discover that creating a composite index on seemingly unrelated columns dramatically improves performance for a specific subset of queries that run frequently during certain hours. These insights emerge from analyzing vast amounts of performance data that would be overwhelming for human administrators to process manually.</p><p>When the system identifies an optimization opportunity, it can automatically implement changes such as adjusting buffer pool sizes, modifying query execution strategies, or creating new indexes. Crucially, these systems include safety mechanisms that allow them to roll back changes if performance degrades, ensuring that automated improvements never compromise system stability.</p><h1 class="blog-sub-title">The Benefits of Continuous Optimization</h1><p>The advantages of automated database tuning extend far beyond simply reducing administrative overhead. Consider how your smartphone automatically adjusts screen brightness based on ambient lighting; automated database tuning provides similar adaptive intelligence for your data infrastructure. 
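The apply-measure-rollback safety loop described above can be sketched in a few lines. Note that `measure()` and the `buffer_pool_mb` knob are stand-ins invented for this example; nothing here reflects any specific database's tuning API:

```python
# Toy sketch of a self-tuning safety loop: apply a candidate change,
# re-measure, and roll back on regression. All names are hypothetical.
config = {"buffer_pool_mb": 128}

def measure(cfg):
    # Pretend latency model: this workload happens to prefer a 256 MB pool.
    return abs(cfg["buffer_pool_mb"] - 256) + 10  # lower is better

def try_tuning(cfg, key, new_value):
    """Apply a candidate change; keep it only if measured latency improves."""
    baseline = measure(cfg)
    previous = cfg[key]
    cfg[key] = new_value          # apply the candidate change
    if measure(cfg) >= baseline:  # regression: roll it back
        cfg[key] = previous
        return False
    return True                   # improvement: keep it

kept = try_tuning(config, "buffer_pool_mb", 256)      # helps, so it sticks
reverted = try_tuning(config, "buffer_pool_mb", 64)   # hurts, so it reverts
```

Production systems replace the toy latency model with real telemetry, but the guarantee is the same: a change that degrades performance never survives.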
The system responds to changing workload patterns in real-time, optimizing performance for current conditions rather than relying on static configurations that might have been appropriate weeks or months ago.</p><p>This continuous optimization approach proves particularly valuable for organizations with fluctuating workloads. An e-commerce platform, for example, might experience dramatically different database usage patterns during holiday shopping seasons compared to typical business periods. Automated tuning systems adapt seamlessly to these variations, ensuring optimal performance regardless of load characteristics.</p><p>Additionally, automated systems can identify and resolve performance issues before they impact end users. By analyzing trends and patterns, these solutions often detect emerging bottlenecks and implement preventive measures, much like how modern cars can predict when maintenance will be needed based on driving patterns and component wear.</p><h1 class="blog-sub-title">Why the Human Touch Still Matters</h1><p>Despite the impressive capabilities of automated database tuning systems, they don't completely replace the need for manual oversight and optimization. While automated systems handle routine operations excellently, experienced administrators remain essential for complex situations and strategic decision-making, bringing contextual understanding that automation cannot fully replicate. They understand business requirements, anticipate upcoming changes in application usage patterns, and can make strategic decisions about database architecture that go beyond performance optimization. 
For instance, a DBA might recognize that certain performance issues stem from fundamental design problems that require application-level changes rather than database tuning.</p><p>This is where specialized monitoring tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> prove invaluable for bridging the gap between automated optimization and human expertise. Navicat Monitor provides database professionals with comprehensive performance monitoring and analysis capabilities that complement automated tuning systems. The platform enables administrators to build custom metrics that track specific performance indicators relevant to their database environments, while its Query Analyzer offers graphical representations of query logs and detailed performance statistics. When automated systems make recommendations or implement changes, Navicat Monitor's visualization tools and alert mechanisms help administrators understand the impact and rationale behind these optimizations, ensuring that human expertise remains an integral part of the database management process.</p><figure>  <figcaption>Navicat Monitor Query Analyzer</figcaption>  <img alt="Navicat Monitor Query Analyzer" src="https://www.navicat.com/link/Blog/Image/2025/20251017/Screenshot_Navicat_Monitor_Query_Analyzer.png" /></figure><h1 class="blog-sub-title">Conclusion</h1><p>Automated database tuning represents a significant leap forward in how we manage database performance, offering the promise of continuously optimized systems that adapt to changing conditions without constant human intervention. While these AI-powered solutions handle routine optimization tasks with impressive efficiency, the combination of automated intelligence and human expertise creates the most robust approach to database management. 
As organizations increasingly rely on data-driven decision making, automated database tuning systems will become essential tools for maintaining the high-performance, reliable database infrastructure that modern applications demand.</p></body></html>]]></description>
</item>
<item>
<title>A Beginner's Guide to GraphQL</title>
<link>https://www.navicat.com/company/aboutus/blog/3444-a-beginner-s-guide-to-graphql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Beginner's Guide to GraphQL</title></head><body><b>Oct 8, 2025</b> by Robert Gravelle<br/><br/><p>In the world of web development, REST APIs have long been the standard for client-server communication. Now, a newer technology called GraphQL is reshaping how developers think about data retrieval and API design. Understanding GraphQL is becoming increasingly important for both web and database developers who want to build efficient, flexible applications. For database developers specifically, GraphQL represents a fundamental shift in how applications interact with data stores. Rather than building multiple database queries to satisfy different API endpoints, GraphQL enables you to design your database schema in a way that directly mirrors your API structure. This alignment between database design and API consumption patterns means that database developers can create more intuitive, performant data access layers.</p><h1 class="blog-sub-title">What is GraphQL?</h1><p>GraphQL, which stands for "Graph Query Language", is both a query language for APIs and a runtime for executing those queries. Developed by Facebook in 2012 and open-sourced in 2015, GraphQL provides a more efficient, powerful, and flexible alternative to traditional REST API architectures.</p><p>You can compare GraphQL to a smart waiter at a restaurant. Instead of bringing you a pre-set meal (like REST endpoints that return fixed data structures), GraphQL lets you specify exactly what ingredients you want on your plate. You can request just the appetizer, or combine elements from different courses, all in a single request. This analogy helps illustrate GraphQL's core strength: giving clients precise control over the data they receive.</p><p>The "Graph" in GraphQL refers to how it models data as an interconnected network of relationships, much like how information connects in real-world scenarios. 
Rather than thinking in terms of multiple endpoints, GraphQL treats your entire API as a single, queryable graph of data.</p><h1 class="blog-sub-title">Key Advantages of GraphQL</h1><p>GraphQL addresses several pain points that developers commonly encounter with traditional REST APIs. The most significant advantage is the elimination of over-fetching and under-fetching data. With REST, you might request user information and receive everything about that user, even if you only need their name and email. GraphQL allows you to request exactly the fields you need, reducing bandwidth usage and improving performance.</p><p>Another major benefit is the reduction of multiple API calls. In REST architectures, displaying a user's profile with their posts and comments might require three separate requests. GraphQL enables you to fetch all this related data in a single query, significantly reducing network overhead and improving application speed.</p><p>GraphQL also provides strong typing and introspection capabilities. The schema acts as a contract between the client and server, clearly defining what data is available and how it can be queried. This self-documenting nature makes APIs easier to understand and work with, while the type system helps catch errors early in development.</p><h1 class="blog-sub-title">How GraphQL Works</h1><p>At its core, GraphQL operates through a schema that defines the structure of your data and the operations that can be performed. This schema serves as the single source of truth for your API, describing what data is available, how it's connected, and what operations clients can perform.</p><p>When a client makes a GraphQL query, it specifies exactly which fields it wants from which types. The GraphQL runtime then validates this query against the schema and executes it by calling resolver functions. 
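The field-level resolution just described can be illustrated with a deliberately tiny, hand-rolled sketch. This is not a real GraphQL library; it simply shows the core idea that each requested field maps to a resolver function and only the requested fields are fetched. The data and field names are made up:

```python
# Miniature illustration of field resolvers (not a real GraphQL runtime).
USERS = {42: {"name": "Ada", "email": "ada@example.com", "age": 36}}

# The "schema": each field the client may request has a resolver.
RESOLVERS = {
    "name":  lambda user_id: USERS[user_id]["name"],
    "email": lambda user_id: USERS[user_id]["email"],
    "age":   lambda user_id: USERS[user_id]["age"],
}

def execute_query(user_id, requested_fields):
    """Resolve exactly the requested fields: no over- or under-fetching."""
    unknown = set(requested_fields) - RESOLVERS.keys()
    if unknown:  # the schema rejects fields it does not define
        raise ValueError(f"unknown fields: {unknown}")
    return {field: RESOLVERS[field](user_id) for field in requested_fields}

# The client asks for two fields only; "age" is never fetched.
result = execute_query(42, ["name", "email"])
```

In a real implementation the schema is declared in GraphQL's type language and the runtime validates queries against it, but the resolver-per-field structure is the same.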
These resolvers are responsible for fetching the actual data, whether from databases, other APIs, or any other data source.</p><p>The beauty of this approach lies in its flexibility. The same GraphQL endpoint can serve vastly different queries, each returning only the data requested. This eliminates the need for multiple endpoints while providing fine-grained control over data retrieval.</p><h1 class="blog-sub-title">Working with GraphQL Using Navicat</h1><p>While GraphQL offers powerful capabilities for API development, the effectiveness of any GraphQL implementation ultimately depends on the quality and performance of its underlying data sources. This is where database management tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> become essential.</p><p>Navicat excels at managing the diverse range of databases that commonly serve as GraphQL backends. Whether your GraphQL resolvers are fetching data from PostgreSQL, MySQL, MongoDB, or Redis, having robust database management capabilities is crucial for GraphQL success. You can use Navicat to optimize your database schemas, monitor query performance, and ensure your data structures are designed to efficiently support the queries that GraphQL applications often require.</p><p>The relationship between GraphQL and your database layer is particularly important to understand. Since GraphQL resolvers can trigger multiple database queries to fulfill a single API request, database performance becomes even more critical than in traditional REST architectures. 
<a class="default-links" href="https://www.navicat.com/products/navicat-monitor" target="_blank">Navicat's database monitoring and optimization</a> features help you identify bottlenecks, optimize indexes, and structure your data in ways that minimize the database load when serving GraphQL queries.</p><h1 class="blog-sub-title">Conclusion</h1><p>GraphQL represents a significant evolution in API design, offering developers more control, efficiency, and flexibility than traditional approaches. By allowing precise data fetching, reducing network overhead, and providing strong typing, GraphQL addresses many of the challenges that have long plagued API development. As you explore this technology, tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> can significantly ease your development process, whether you're working directly with GraphQL APIs or managing the databases that support them. Understanding GraphQL is becoming essential for modern developers, and now is an excellent time to begin incorporating it into your development toolkit.</p></body></html>]]></description>
</item>
<item>
<title>The Case for a Universal AI Tool in the Big Data Era</title>
<link>https://www.navicat.com/company/aboutus/blog/3443-the-case-for-a-universal-ai-tool-in-the-big-data-era.html</link>
<description><![CDATA[<!doctype html><html><head><title>The Case for a Universal AI Tool in the Big Data Era</title></head><body><b>Oct 3, 2025</b> by G2<br/><br/><p>In today's AI and big data era, organizations are facing unprecedented complexity in how they manage, analyze, and secure their data. Database professionals are expected to deliver insights faster than ever while navigating fragmented environments across relational, NoSQL, and cloud-native systems. Fragmented tools often create inefficiency and errors, making it clear that what modern teams need is not just another tool, but a universal AI-driven platform that integrates and streamlines the entire database lifecycle. Solutions like Navicat Premium demonstrate how universal AI tools can help organizations thrive in this new era by boosting productivity, simplifying management, and empowering teams to make smarter decisions.</p><h1 class="blog-sub-title">The Evolving Role of Database Professionals</h1><p>Database administrators (DBAs) and developers juggle responsibilities like data modeling, query optimization, performance tuning, and security. These tasks are often repetitive, time-consuming, and prone to human error, especially when working across multiple systems. AI-powered tools address these challenges by automating routine work, providing intelligent recommendations, and allowing professionals to focus on higher-value strategy and innovation.</p><h1 class="blog-sub-title">The Case for a Universal AI Tool</h1><p>Many organizations use multiple database platforms, BI tools, and admin dashboards. This fragmentation slows teams, creates inefficiencies, and complicates collaboration. A universal AI-driven platform consolidates these workflows into a single environment, enabling automation, natural language interaction, and predictive insights. By providing one hub for managing all database operations, universal AI tools reduce friction and accelerate results. 
Navicat Premium embodies this vision by unifying diverse databases while embedding AI-driven productivity features.</p><h1 class="blog-sub-title">Streamlining Workflows with AI Automation</h1><p>AI can automate schema design, query generation, and performance monitoring. Intelligent query builders analyze input and suggest optimized SQL queries, while predictive systems recommend index or schema adjustments before issues arise. This saves hours of manual debugging and ensures that teams are always working at peak efficiency.</p><h1 class="blog-sub-title">Simplifying Multi-Database Management</h1><p>With hybrid and cloud-native architectures, teams often manage relational and NoSQL databases simultaneously. A universal AI tool like Navicat Premium eliminates the need to switch between interfaces by offering a single platform to query, visualize, and synchronize data across systems. Natural language processing enables even non-technical staff to ask questions like, 'Show me the top customers this quarter,' and instantly see results.</p><h1 class="blog-sub-title"><a class="default-links" href="https://www.navicat.com/en/" target="_blank">Navicat Premium</a>: Thriving in the AI Era</h1><p>Navicat Premium supports MySQL, PostgreSQL, SQL Server, Oracle, Snowflake, MongoDB, Redis and cloud databases in one environment. Its AI-driven features simplify complex SQL creation through natural language, detect inefficiencies in database structures, and recommend optimizations like index adjustments. 
With automation for synchronization, backup, and cross-database migration, Navicat Premium positions itself as the universal AI-driven platform professionals can rely on.</p><img src="https://www.navicat.com/link/Blog/Image/2025/20251003/Screenshot_Navicat_17_Premium_Windows_Main_screen.png" style="max-width: 900px"><h1 class="blog-sub-title">Enhancing Decision-Making with Predictive Insights</h1><p>Beyond automation, AI offers predictive analytics to anticipate performance bottlenecks or failures before they occur. Machine learning models trained on usage patterns help DBAs proactively tune systems, scale resources, and maintain reliability. This predictive capability ensures better cost management in cloud environments while minimizing downtime.</p><h1 class="blog-sub-title">Improving Collaboration and Accessibility</h1> <p>AI-driven tools bridge the gap between technical experts and business users. Navicat Premium's visual dashboards and natural language interface enable analysts, managers, and DBAs to collaborate on insights without relying solely on SQL expertise. This democratizes access to data and accelerates decision-making across the organization.</p><h1 class="blog-sub-title">Ensuring Data Security and Compliance</h1><p>AI enhances security by flagging anomalies in real time, such as unusual logins or sensitive data queries. Navicat's automated audit and compliance features reduce manual workload while helping teams stay aligned with regulations like GDPR and HIPAA.</p><h1 class="blog-sub-title">Real-World Impact: A Case Study</h1><p>A mid-sized e-commerce company managing multiple relational and NoSQL systems adopted Navicat Premium. The results were clear: AI-optimized queries reduced execution time by 30%, the unified interface eliminated tool-switching, and predictive alerts prevented downtime. Business users created their own reports through visual dashboards, freeing DBAs to focus on innovation. 
Together, these benefits saved the company hours each week while improving customer experience.</p><h1 class="blog-sub-title">The Future of Universal AI Tools</h1><p>As AI capabilities advance, the next frontier for universal tools like Navicat Premium lies in deeper BI visualization, more intuitive natural language interfaces, and smarter predictive analytics. In the future, universal AI-driven platforms won't just support teams; they will guide them proactively, shaping strategies and keeping organizations ahead in the AI era.</p><h1 class="blog-sub-title">Conclusion</h1><p>In an age defined by AI and big data, fragmented tools are no longer enough. Organizations need universal AI-driven platforms that consolidate, automate, and empower. Navicat Premium (<a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Free version is available</a>) is at the forefront of this shift, enabling database professionals to work smarter, collaborate better, and deliver faster. The future belongs to teams equipped with universal AI tools, and Navicat is leading the way.</p></body></html>]]></description>
</item>
<item>
<title>Conversational Database Interfaces: From SQL to Natural Language Database Interaction</title>
<link>https://www.navicat.com/company/aboutus/blog/3438-conversational-database-interfaces-from-sql-to-natural-language-database-interaction.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Conversational Database Interfaces: From SQL to Natural Language Database Interaction</title></head><body><b>Sep 26, 2025</b> by Robert Gravelle<br/><br/><p>Conversational database interfaces represent a cutting-edge approach to data interaction, powered by large language models that enable users to query databases using plain English rather than complex SQL commands. Think of these interfaces as intelligent translators that sit between you and your database, converting your natural language questions into precise database queries and then presenting the results in an easily understandable format.</p><p>These systems leverage advanced natural language processing capabilities to understand context, intent, and nuance in human speech patterns. When you ask a question like "Show me all customers who made purchases over $1000 last month," the interface analyzes your request, identifies the relevant tables and columns, constructs the appropriate SQL query, executes it, and returns the results in a conversational manner. This technology levels the playing field by removing the technical barrier that has traditionally separated business users from their data. In this article, we'll explore how these revolutionary interfaces work, examine the key differences between conversational systems and NoSQL databases, and demonstrate how modern database management tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> support this technological innovation.</p><h1 class="blog-sub-title">The Technology Behind Natural Language Queries</h1><p>Large language models serve as the foundation for these conversational interfaces, having been trained on vast amounts of text data that includes both natural language and structured query languages. 
These models understand the relationships between everyday language and database operations, enabling them to perform complex translations between human intent and machine-executable commands.</p><p>The process involves several sophisticated steps that happen seamlessly in the background. First, the system parses your natural language input to identify key entities, relationships, and operations. Then it maps these elements to your specific database schema, understanding which tables contain the relevant information and how they relate to each other. Finally, it constructs and executes the appropriate query while handling potential ambiguities or errors gracefully.</p><p>Modern implementations often include context awareness, allowing for follow-up questions and maintaining conversation history. This means you can ask a follow-up question like "What about the previous year?" and the system understands you're referring to the same customer purchase data from your earlier query.</p><h1 class="blog-sub-title">NoSQL versus Conversational Interfaces</h1><p>Understanding the difference between NoSQL databases and conversational database interfaces is crucial for grasping how these technologies complement rather than compete with each other. This distinction often confuses newcomers to database technology because both represent departures from traditional database interactions, but they address entirely different aspects of data management.</p><p>NoSQL databases fundamentally change how data is stored and organized. Unlike traditional relational databases that store information in structured tables with predefined relationships, NoSQL systems embrace flexible, schema-less approaches. Document databases like MongoDB store information as JSON-like documents, while graph databases like Neo4j represent data as interconnected nodes and relationships. 
These systems excel at handling unstructured data, scaling horizontally across multiple servers, and adapting to changing data requirements without rigid schema constraints.</p><p>Conversational database interfaces, on the other hand, revolutionize how users interact with stored data, regardless of the underlying storage mechanism. These interfaces can work equally well with traditional SQL databases, NoSQL systems, or hybrid architectures. The key insight is that conversational interfaces address the user experience layer, while NoSQL addresses the data storage layer. You might have a conversational interface that allows natural language queries against a MongoDB document database, combining the flexibility of NoSQL storage with the accessibility of natural language interaction.</p><h1 class="blog-sub-title">Leveraging Database Management Tools for Conversational Interfaces</h1><p><a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> provides comprehensive support for working with databases that implement conversational interfaces, offering a bridge between traditional database management and modern natural language query capabilities. The platform's intuitive design philosophy aligns perfectly with the accessibility goals of conversational database systems, providing visual tools that complement natural language interactions.</p><p>Through Navicat's unified interface, database administrators and developers can manage the underlying database structures that support conversational interfaces while also testing and refining the natural language processing capabilities. 
The tool's connection management features make it easy to work with various database systems that might be powering conversational interfaces, from traditional MySQL and PostgreSQL installations to modern NoSQL systems like MongoDB or cloud-based solutions.</p><p>Navicat's query building and visualization tools become particularly valuable when developing and debugging conversational database interfaces, allowing teams to understand exactly how natural language queries translate into database operations and optimize performance accordingly.</p><h1 class="blog-sub-title">Conclusion</h1><p>Conversational database interfaces powered by large language models represent a fundamental shift toward more accessible and intuitive data interaction. By removing the technical barriers traditionally associated with database queries, these systems enable broader organizational participation in data-driven decision making. As this technology continues to evolve, the combination of flexible storage solutions, intelligent query interfaces, and comprehensive management tools is making data truly accessible to users regardless of their technical expertise.</p></body></html>]]></description>
</item>
<item>
<title>Introducing Navicat On-Prem Server 3.0</title>
<link>https://www.navicat.com/company/aboutus/blog/3431-introducing-navicat-on-prem-server-3-0.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Introducing Navicat On-Prem Server 3.0</title></head><body><b>Sep 19, 2025</b> by Robert Gravelle<br/><br/><p>On Oct 28, 2024, the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2803-seamless-mysql-and-mariadb-management-with-navicat-on-prem-server" target="_blank">Seamless MySQL and MariaDB Management with Navicat On-Prem Server</a> blog installment introduced an on-premise solution that allows distributed teams to collaborate in real time, share data, coordinate tasks, and communicate seamlessly through a centralized platform. It's one of two Navicat offerings for Collaboration - the other being <a class="default-links" href="https://www.navicat.com/en/products/navicat-cloud" target="_blank">Navicat Cloud</a>. Whereas Navicat Cloud offers a central space for your team to store Navicat objects, Navicat On-Prem Server is an on-premise solution for hosting a cloud environment where you can securely store Navicat objects internally at your location.</p> <p>Now, Navicat has just released Navicat On-Prem Server 3.0. It includes a few new features that promise to help foster even greater collaboration within your team. 
There are four main features as follows:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">  <li>Added support for PostgreSQL and Fujitsu Enterprise Postgres connections.</li>  <li>The Query Editor has been enhanced to include Code Completion, Code Folding and SQL Beautify.</li>  <li>Expanded Object Filtering Features.</li>  <li>An improved "New Connection" Dialog.</li></ul>    <p>In today's blog we'll be reviewing Navicat On-Prem Server 3.0 and evaluating how the above features help manage your MySQL, MariaDB, and PostgreSQL instances more effectively than ever before.</p><h1 class="blog-sub-title">Upgrading to Version 3.0</h1><p>Moving up to Navicat On-Prem Server 3.0 is a fairly simple process:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>To begin, head on over to the <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">download page</a> and locate the installation file that matches your operating system. </li><li>Before running the installation program, be sure to uninstall the previous version of Navicat On-Prem Server.</li><li>Now you're ready to execute the installation program.</li><li>Once the installation completes, a browser with the initial setup page should open. If it doesn't, you can right-click the app icon in the bottom-right corner to start the app.</li><li>Navicat On-Prem Server will remember your login credentials as well as all other configuration details, so you should be able to log in to the app exactly as you did in previous versions.</li><li>Upon logging in for the first time, you'll be greeted by the Welcome splash screen.
It highlights all of the new features in version 3.0.<p><img alt="welcome_screen (56K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/welcome_screen.jpg" height="653" width="757" /></p></li></ul><p>Now let's turn our attention to the new features described above.</p><h1 class="blog-sub-title">PostgreSQL and Fujitsu Enterprise Postgres Connections</h1><p>If we click the "+New" button at the top of the main app screen, a context menu will appear containing two items: "New Project" and "New Connection". Clicking on the latter will open the New Connection Dialog. Looking at the Connection Filter we can see that there are many new connection types, the most notable being PostgreSQL. Beyond that, the Vendor Filter has also been expanded to include Fujitsu, HighGo, Kingbase, and more:</p><img alt="connection_comparison (53K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/connection_comparison.jpg" height="521" width="345" /><h1 class="blog-sub-title">Enhanced Query Editor</h1><p>The Query Editor has been completely revamped to include some of the most sought-after features from the Navicat database administration and development tools. These include: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Beautify SQL: Automatically reformat messy or compressed SQL code into clean, properly indented, and readable format with consistent spacing and line breaks with a click of a button! 
The resulting code will be structured to follow standard formatting conventions, making it easier to understand and maintain.</li><li>Pin: Preserves critical data snapshots alongside their queries and timing information for ongoing analysis.</li></ul><p>All of the essential features that you'd expect are also there, including syntax highlighting, code folding, code completion, and Query Explain.</p><img alt="query_editor (97K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/query_editor.jpg" height="721" width="634" /><p>Together, these features boost development productivity and yield clearer SQL code.</p><h1 class="blog-sub-title">Expanded Object Filtering Features</h1><p>Thanks to the expanded Object Filtering features, you can now efficiently search through large volumes of data to locate specific objects instantly, so important information is always within reach.</p><img alt="object_filter (35K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/object_filter.png" height="450" width="747" /><h1 class="blog-sub-title">Improved "New Connection" Dialog</h1><p>You'll notice that there are two new icons on the New Connection dialog to the left of the Search box: they activate Grid View and List View respectively. Previously, only Grid View was supported.
Here's a comparison of each view type:</p><figure>  <figcaption>New Connection Dialog - Grid View</figcaption>  <img alt="new_connection_dialog (93K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/new_connection_dialog.jpg" height="725" width="752" /></figure><figure>  <figcaption>New Connection Dialog - List View</figcaption>  <img alt="new_connection_dialog_list_view (64K)" src="https://www.navicat.com/link/Blog/Image/2025/20250919/new_connection_dialog_list_view.jpg" height="639" width="733" /></figure><h1 class="blog-sub-title">Conclusion</h1><p>Navicat On-Prem Server 3.0 is the perfect solution for organizations that wish to benefit from the convenience and features of a cloud-based solution, all while maintaining full control over their data. Navicat On-Prem Server 3.0 allows teams to synchronize all their connection settings, queries, aggregation pipelines, snippets, model workspaces, BI workspaces and virtual group information across all their devices. Beyond sharing data, Navicat On-Prem Server 3.0 offers all the tools that professionals need for administering, monitoring, and managing MySQL, MariaDB, PostgreSQL, and Fujitsu Enterprise Postgres (FEP) databases.</p><p>Navicat On-Prem Server 3.0 is available for the Windows, macOS, and Linux operating systems. You'll find a 14-day trial version for each of these platforms on the <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">download page</a>.</p></body></html>]]></description>
</item>
<item>
<title>How Memory-First Databases are Reshaping Enterprise Storage</title>
<link>https://www.navicat.com/company/aboutus/blog/3426-how-memory-first-databases-are-reshaping-enterprise-storage.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How Memory-First Databases are Reshaping Enterprise Storage</title></head><body><b>Sep 12, 2025</b> by Robert Gravelle<br/><br/><p>The database world is experiencing a memory-first revolution that's fundamentally changing how we approach data storage and processing. This transformation is happening from two directions simultaneously: traditional disk-based databases like PostgreSQL and MySQL are incorporating sophisticated in-memory capabilities, while pure in-memory systems like Redis are adding robust persistent storage features. The result is a new generation of hybrid databases that eliminate the age-old tradeoff between speed and reliability. This article explores how this revolution is reshaping the database landscape, from the driving forces behind the change to how to manage memory-first databases.</p><h1 class="blog-sub-title">Why In-Memory Computing Matters</h1><p>To appreciate this revolution, we need to understand why in-memory computing has become so crucial in modern data management. Traditional databases store data on disk, which requires time-consuming read and write operations every time you access information. Think of it like having to walk to a filing cabinet across the room every time you need a document, versus having all your important papers right on your desk.</p><p>In-memory computing keeps data in RAM, where it can be accessed thousands of times faster than disk storage. This dramatic speed improvement has made in-memory systems essential for applications requiring real-time analytics, high-frequency trading, gaming leaderboards, and session management. However, pure in-memory systems traditionally faced a critical limitation: data volatility. When power goes out or systems restart, everything stored only in memory disappears. 
Organizations have developed several strategies to mitigate this volatility risk while preserving the speed advantages of in-memory systems:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Redundant in-memory clusters where data is replicated across multiple servers, ensuring that if one machine fails, the data remains available on other nodes.</li>    <li>Periodic snapshots that capture the entire memory state to disk at regular intervals, much like taking photographs of your desk at the end of each day so you can restore it if everything gets scattered.</li>    <li>Write-ahead logging, which records every data change to persistent storage before applying it to memory, creating a complete audit trail that can rebuild the memory state even after unexpected failures.</li></ul><h1 class="blog-sub-title">Adding Memory-First Capabilities to Traditional Databases</h1><p>Traditional databases like PostgreSQL, MySQL, and Oracle have recognized that modern applications demand faster response times than disk-based storage can provide. Rather than abandoning their proven architectures, these systems are integrating sophisticated in-memory layers that work seamlessly with their existing persistent storage.</p><p>Consider how PostgreSQL has evolved to include advanced caching mechanisms and in-memory table spaces. These features allow frequently accessed data to remain in memory while maintaining the database's ACID properties and durability guarantees. Similarly, MySQL's integration with memory engines and Oracle's in-memory column store demonstrate how traditional databases are adapting to meet performance demands without sacrificing their core strengths.</p><p>This evolution allows organizations to gradually adopt in-memory capabilities without completely overhauling their existing database infrastructure.
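The PostgreSQL caching behavior mentioned above can be observed directly through the database's built-in statistics views. As a sketch, the buffer cache hit ratio, a common first check on how much of a workload is already served from memory, can be computed like this:

```sql
-- Fraction of block reads served from PostgreSQL's shared buffer cache
SELECT datname,
       round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```

A ratio near 1.0 suggests the working set already fits in memory; persistently lower values are one signal that selective in-memory optimization, or a larger shared_buffers setting, may pay off.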
They can identify performance-critical tables or queries and selectively apply in-memory optimizations while keeping the rest of their data in traditional storage. This hybrid approach provides a practical migration path that balances performance gains with operational stability.</p><h1 class="blog-sub-title">Pure In-Memory Systems: Embracing Persistence</h1><p>Meanwhile, pure in-memory systems like Redis, Memcached, and Apache Ignite are adding sophisticated persistence mechanisms. Redis, originally designed as a simple key-value store that lived entirely in memory, now offers multiple persistence options including point-in-time snapshots and append-only file logging.</p><p>These persistence features address the primary concern organizations have had with in-memory systems: data durability. Redis's RDB snapshots create periodic backups of the entire dataset, while AOF (Append Only File) logging records every write operation, allowing for complete data recovery even after system failures. These enhancements have transformed Redis from a simple caching solution into a full-featured database capable of serving as a primary data store for many applications.</p><p>The addition of persistence doesn't compromise the speed advantages of in-memory systems. Instead, it provides configurable durability options that let organizations choose the right balance between performance and data safety for their specific use cases. Applications can operate at memory speed while having confidence that their data will survive system restarts and failures.</p><h1 class="blog-sub-title">In-Memory Database Management with Navicat</h1><p>As databases evolve to support both in-memory and persistent storage capabilities, database administrators and developers need tools that can effectively manage these hybrid systems. 
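In Redis, the snapshot and append-only options described above are enabled with a few redis.conf directives. A minimal sketch (the thresholds shown are illustrative defaults, not tuning recommendations):

```conf
# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, or >=10000 in 60s
save 900 1
save 300 10
save 60 10000

# AOF: log every write so the dataset can be fully rebuilt after a crash
appendonly yes
appendfsync everysec   # fsync about once per second: a common speed/durability balance
```

Both mechanisms can run side by side; many deployments rely on the AOF for durability and keep RDB snapshots for fast restarts and backups.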
<a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> provides comprehensive support for working with databases that embody this memory-first philosophy, offering a unified interface for managing both traditional and modern database architectures.</p><p>Navicat's support for Redis allows developers to work with in-memory data structures while configuring persistence settings, monitoring memory usage, and managing data expiration policies. The tool provides visual interfaces for understanding how data flows between memory and disk, making it easier to optimize performance while ensuring data durability. For traditional databases with in-memory capabilities, Navicat offers tools to monitor cache hit rates, configure memory allocation, and identify opportunities for in-memory optimization.</p><h1 class="blog-sub-title">Conclusion</h1><p>The memory-first database revolution represents a maturation of database technology that addresses the real-world needs of modern applications. Organizations no longer need to choose between speed and durability, or between familiar traditional databases and cutting-edge in-memory systems. This transformation is creating more flexible, efficient, and capable data management solutions that can adapt to diverse application requirements while reducing operational complexity. As this revolution continues, we can expect to see even more sophisticated hybrid systems that blur the lines between different database categories, ultimately providing better tools for managing the ever-growing demands of data-driven applications.</p></body></html>]]></description>
</item>
<item>
<title>Going Beyond Basic Monitoring with Modern Database Observability Platforms</title>
<link>https://www.navicat.com/company/aboutus/blog/3414-going-beyond-basic-monitoring-with-modern-database-observability-platforms.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Going Beyond Basic Monitoring with Modern Database Observability Platforms</title></head><body><b>Aug 29, 2025</b> by Robert Gravelle<br/><br/><p>Database observability represents a totally new way for organizations to monitor and understand their data infrastructure. Unlike traditional monitoring that focuses on basic metrics like CPU usage and memory consumption, observability platforms provide deep, contextual insights into database behavior, enabling teams to understand not just what is happening, but why it's happening and how to optimize performance proactively. Today's blog explores the evolution from basic database monitoring to advanced observability, examining leading platforms, built-in database features, and practical implementation strategies for modern data environments.</p><h1 class="blog-sub-title">Database Observability vs. Database Monitoring</h1><p>Database observability extends beyond simple monitoring by incorporating three key pillars: metrics, logs, and traces. Think of it as the difference between checking your car's dashboard warning lights versus having a comprehensive diagnostic system that shows you engine performance, fuel efficiency patterns, and predictive maintenance needs. Observability platforms collect granular data about query execution plans, lock contention, index usage, and connection patterns, then correlate this information to provide actionable insights.</p><p>This approach becomes particularly valuable in modern distributed architectures where databases often span multiple environments and interact with numerous applications. 
Traditional monitoring might tell you that response times are slow, but observability platforms can pinpoint the specific query causing bottlenecks, identify which indexes are underutilized, and even suggest optimization strategies based on historical patterns.</p><h1 class="blog-sub-title">Leading Database Observability Platforms</h1><p>Several specialized platforms have emerged to address the growing complexity of database performance management. <strong>Datadog's Database Monitoring</strong> provides comprehensive visibility across multiple database engines, offering features like query-level performance tracking, execution plan analysis, and automated anomaly detection. The platform excels at correlating database performance with application metrics, helping teams understand the full impact of database issues on user experience.</p><p><strong>SolarWinds Database Performance Analyzer</strong> takes a different approach, focusing on wait time analysis to identify performance bottlenecks. By examining what queries are waiting for and why, it helps database administrators understand resource contention and optimize accordingly. The platform's strength lies in its ability to provide historical context, allowing teams to identify performance trends and capacity planning needs.</p><p><strong>Percona Monitoring and Management</strong> represents the open-source approach to database observability, offering deep insights into MySQL, PostgreSQL, and MongoDB environments. Its strength lies in providing detailed query analytics and performance schema integration, making it particularly valuable for organizations with complex, high-traffic database environments.</p><h1 class="blog-sub-title">Traditional Databases Embracing Observability</h1><p>Recognizing the critical importance of observability, traditional database vendors have integrated sophisticated monitoring capabilities directly into their platforms. 
<strong>Oracle's Autonomous Database</strong> includes built-in machine learning algorithms that continuously monitor performance patterns and automatically optimize configurations. This self-tuning capability represents a significant evolution from reactive monitoring to proactive performance management.</p><p><strong>Microsoft SQL Server</strong>'s Query Store functionality exemplifies how traditional databases are incorporating observability principles. By automatically capturing query execution statistics and maintaining historical performance data, SQL Server enables administrators to identify performance regressions and understand the impact of schema changes over time. The platform's integration with <strong>Azure Monitor</strong> further extends these capabilities into cloud environments.</p><p><strong>PostgreSQL</strong> has enhanced its observability through extensions like pg_stat_statements and pg_stat_activity, which provide detailed insights into query performance and system activity. These built-in tools, combined with third-party solutions, create a comprehensive observability ecosystem that rivals dedicated monitoring platforms.</p><h1 class="blog-sub-title">Navicat Monitor: Providing Comprehensive Database Insights</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> exemplifies the evolution of database observability tools by providing deep insights into database behavior, query performance, and resource utilization across multiple database types. The platform's strength lies in its ability to monitor heterogeneous database environments from a single interface, supporting MySQL, MariaDB, PostgreSQL, SQL Server, as well as popular cloud services.</p><p>The platform's real-time monitoring capabilities extend beyond basic performance metrics to include detailed query analysis, connection monitoring, and resource utilization tracking. 
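To make the pg_stat_statements capability mentioned above concrete, here is a typical query for surfacing the costliest statements. It assumes the extension is installed and preloaded, and uses the column names from PostgreSQL 13 and later (older releases call these columns total_time and mean_time):

```sql
-- Top 10 statements by cumulative execution time
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Results like these are exactly the raw material that observability platforms correlate with application metrics and historical trends.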
Navicat Monitor's alerting system enables proactive issue resolution by notifying administrators of performance anomalies before they impact end users. Its historical reporting features provide valuable insights for capacity planning and performance trend analysis, making it an essential tool for organizations managing complex database infrastructures.</p><h1 class="blog-sub-title">Conclusion</h1><p>Database observability platforms represent a critical evolution in database management, transforming reactive monitoring into proactive performance optimization. As organizations continue to rely on increasingly complex data architectures, these platforms provide the visibility and insights necessary to maintain optimal performance while ensuring reliable data access. The integration of observability features into traditional database platforms, combined with specialized monitoring solutions, creates a comprehensive foundation that empowers database administrators to deliver exceptional performance and reliability.</p></body></html>]]></description>
</item>
<item>
<title>Privacy-Preserving Databases: Protecting Data While Enabling Access</title>
<link>https://www.navicat.com/company/aboutus/blog/3405-privacy-preserving-databases-protecting-data-while-enabling-access.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Privacy-Preserving Databases: Protecting Data While Enabling Access</title></head><body> <b>Aug 19, 2025</b> by Robert Gravelle<br/><br/><p>In an era where data breaches make headlines weekly and privacy regulations like GDPR (General Data Protection Regulation) reshape how organizations handle personal information, privacy-preserving databases have emerged as a critical technology. These specialized database systems allow organizations to store, query, and analyze sensitive data while maintaining strict privacy protections for individuals whose information is contained within. This article explores the core technologies that make privacy protection possible, examines leading database solutions in this space, and discusses how both traditional database vendors and modern administration tools are adapting to support these privacy-first approaches.</p><h1 class="blog-sub-title">Core Technologies Behind Privacy Protection</h1><p>Privacy-preserving databases incorporate several key features that distinguish them from traditional database systems. Think of these features as multiple layers of protection, each serving a specific purpose in safeguarding sensitive information.</p><p>The foundation of these systems rests on <strong>differential privacy</strong>, a mathematical framework that adds carefully calibrated noise to query results. This approach ensures that whether any individual's data is included in the database or not, the statistical outputs remain virtually indistinguishable. Imagine trying to determine if a specific person attended a large concert by looking at aggregate attendance statistics - differential privacy makes this type of inference nearly impossible.</p><p><strong>Homomorphic encryption</strong> represents another cornerstone feature, allowing computations to be performed directly on encrypted data without ever decrypting it. 
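As a concrete illustration of computing on ciphertexts, the Paillier cryptosystem is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The sketch below uses deliberately tiny primes so the arithmetic is readable - it is a toy, not a production scheme, which would use keys of 2048 bits or more.

```python
import random
from math import gcd

# Toy Paillier key generation (real deployments use ~2048-bit primes)
p, q = 1789, 1931
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    # Random blinding factor r, coprime to n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 120, 45
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the underlying plaintexts
assert decrypt((ca * cb) % n2) == a + b
```

The key point is that the party performing the multiplication never sees 120 or 45, yet the holder of the private key can decrypt the product to recover their sum.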
This means database queries can execute and return meaningful results while the underlying sensitive data remains encrypted throughout the entire process. It's analogous to performing mathematical operations inside a locked box without ever opening it.</p><p><strong>Secure multi-party computation</strong> enables multiple parties to jointly compute functions over their combined data without revealing their individual inputs to each other. For instance, multiple hospitals could collaborate on medical research by combining their patient data for analysis without any hospital seeing another's specific patient records.</p><p><strong>Zero-knowledge proofs</strong> allow database systems to verify the truth of statements about data without revealing the underlying information itself. These proofs can confirm that certain conditions are met or that specific computations were performed correctly without exposing the sensitive data involved.</p><h1 class="blog-sub-title">Some Examples of Leading Privacy-Preserving Databases </h1><p>Several innovative database systems have emerged to address these privacy challenges. <strong>CryptDB</strong> pioneered the field by enabling SQL queries over encrypted data, using multiple encryption schemes to support different types of database operations while maintaining security.</p><p><strong>Opaque</strong> takes a different approach by combining hardware-based trusted execution environments with differential privacy. This system runs database queries inside secure enclaves that isolate computation from the underlying operating system and hardware, providing both confidentiality and integrity guarantees.</p><p><strong>PrivateSQL</strong> focuses specifically on supporting complex analytical queries while preserving privacy through advanced cryptographic techniques. 
The system demonstrates how organizations can perform sophisticated data analysis without compromising individual privacy.</p><p><strong>Microsoft's SEAL</strong> (Simple Encrypted Arithmetic Library) provides the cryptographic foundation for many privacy-preserving database implementations, offering homomorphic encryption capabilities that enable computation on encrypted data.</p><h1 class="blog-sub-title">Traditional Databases Embracing Privacy Features</h1><p>Established database vendors have recognized the growing demand for privacy protection and are integrating these capabilities into their existing platforms. This evolution represents a significant shift in how traditional database systems approach data protection.</p><p>PostgreSQL has incorporated extensions for differential privacy through projects like <strong>PostgreSQL Anonymizer</strong>, which provides tools for data masking and anonymization directly within the database engine. These features allow organizations to create privacy-safe versions of their datasets for testing and development purposes.</p><p><strong>Oracle Database</strong> has introduced comprehensive data redaction and masking capabilities that can dynamically alter sensitive data presentation based on user privileges and context. The system can automatically detect and protect sensitive data types like credit card numbers and social security numbers.</p><p><strong>Microsoft SQL Server</strong> has integrated <strong>Always Encrypted</strong> technology, which ensures that sensitive data remains encrypted at rest, in transit, and even during query processing. 
The database engine never sees the plaintext data, yet can still perform certain types of queries and operations.</p><p><strong>Amazon's Aurora</strong> and other cloud database services now offer client-side encryption and key management services that enable organizations to maintain control over their encryption keys while leveraging cloud database capabilities.</p><h1 class="blog-sub-title">Navicat's Role in Privacy-Preserving Database Management</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>'s comprehensive database administration and development tools have evolved to support the unique requirements of privacy-preserving database environments. These tools recognize that managing encrypted or privacy-protected data requires specialized capabilities beyond traditional database administration.</p><p>The platform provides secure connection management that supports advanced encryption protocols and authentication mechanisms required by privacy-preserving systems. Database administrators can establish connections to encrypted databases while maintaining the security protocols that these systems demand.</p><p>Navicat's query development environment includes features for working with encrypted data and privacy-preserving query patterns. The tools help developers understand how their queries will interact with privacy protection mechanisms, enabling them to write more efficient and privacy-compliant database operations.</p><h1 class="blog-sub-title">Conclusion</h1><p>Privacy-preserving databases represent a fundamental shift in how we approach data management in an increasingly privacy-conscious world. By incorporating advanced cryptographic techniques and privacy-preserving algorithms, these systems enable organizations to derive value from sensitive data while maintaining robust protection for individual privacy. 
As traditional database vendors continue to integrate these capabilities and specialized tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> evolve to support them, privacy-preserving databases are becoming more accessible and practical for mainstream adoption. The future of data management lies not in choosing between utility and privacy, but in systems that provide both simultaneously through innovative technological approaches.</p></body></html>]]></description>
</item>
<item>
<title>A Guide to Database Sharding as a Service</title>
<link>https://www.navicat.com/company/aboutus/blog/3373-a-guide-to-database-sharding-as-a-service.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Guide to Database Sharding as a Service</title></head><body><b>Aug 8, 2025</b> by Robert Gravelle<br/><br/><p>Database sharding represents one of the most powerful techniques for scaling databases horizontally, addressing the limitations that arise when a single database server can no longer handle the growing demands of modern applications. To understand sharding, imagine a massive library that has grown so large that patrons struggle to find books quickly. Rather than building a taller building, the librarians decide to create multiple smaller libraries, each specializing in certain subjects or alphabetical ranges. This distribution approach mirrors exactly what database sharding accomplishes.</p><p>At its core, sharding involves partitioning a large database into smaller, more manageable pieces called shards, with each shard residing on a separate server or cluster. Each shard contains a subset of the total data, typically divided based on a specific criterion such as customer ID ranges, geographical regions, or alphabetical sorting. This horizontal partitioning strategy differs fundamentally from vertical scaling, where you simply add more power to a single server, because it distributes both the data storage burden and the processing load across multiple systems.</p><p>The beauty of sharding lies in its ability to maintain performance as your application grows. When a single database server reaches its limits in terms of storage capacity, memory, or processing power, sharding allows you to add more servers to handle the increased load, rather than trying to upgrade to an impossibly powerful single machine. 
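The key-based distribution described above can be sketched in a few lines. The shard hostnames and ID boundaries below are hypothetical, purely for illustration of the two most common routing strategies:

```python
import hashlib
from bisect import bisect

# Hypothetical shard endpoints (illustrative names only)
SHARDS = ["shard-0.db.example.com", "shard-1.db.example.com",
          "shard-2.db.example.com", "shard-3.db.example.com"]

def shard_for(customer_id: str) -> str:
    """Hash-based routing: the same key always lands on the same shard."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Range-based routing divides the key space instead, e.g. by ID ranges:
# IDs up to 250,000 on shard 0, up to 500,000 on shard 1, and so on.
RANGE_BOUNDS = [250_000, 500_000, 750_000]

def range_shard_for(numeric_id: int) -> str:
    return SHARDS[bisect(RANGE_BOUNDS, numeric_id)]
```

Hash-based routing spreads load evenly but makes range scans expensive, while range-based routing keeps adjacent keys together at the risk of hot spots - a trade-off that managed sharding services handle on the user's behalf.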
This article explores how Database Sharding as a Service has revolutionized horizontal database scaling by providing managed solutions that automatically distribute data across multiple servers, enabling organizations to achieve high-performance scalability without the traditional complexity of building and maintaining sharding infrastructure themselves.</p><h1 class="blog-sub-title">A Quick History</h1><p>Traditionally, implementing database sharding required significant technical expertise and substantial infrastructure management overhead. Database administrators needed to design sharding strategies, manage data distribution logic, handle cross-shard queries, and maintain consistency across multiple database instances. This complexity often made sharding accessible only to organizations with substantial technical resources and expertise.</p><p>Database Sharding as a Service has emerged as a game-changing solution that abstracts away much of this complexity. These services provide managed sharding solutions where the service provider handles the intricate details of shard management, data distribution, query routing, and infrastructure maintenance. This approach allows organizations to benefit from sharding's scalability advantages without needing to build and maintain the underlying sharding infrastructure themselves.</p><p>The service model transforms sharding from a complex technical challenge into a configurable feature. 
Organizations can focus on their core business logic while the service provider ensures optimal data distribution, handles failover scenarios, manages shard rebalancing, and maintains overall system performance.</p><h1 class="blog-sub-title">Leading Database Sharding Services in the Market</h1><p>Several prominent cloud providers and specialized database companies now offer sophisticated sharding services:</p> <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>Amazon Web Services provides sharding capabilities through Amazon RDS with read replicas and Amazon Aurora's distributed architecture, while their DynamoDB offers automatic partitioning that essentially provides sharding functionality without requiring manual configuration.</li>  <li>Google Cloud offers sharding through Cloud Spanner, which automatically distributes data across multiple servers and regions while maintaining strong consistency guarantees. This service exemplifies how modern sharding solutions can handle complex distributed database challenges transparently.</li>    <li>MongoDB Atlas represents another significant player in this space, providing automated sharding that can dynamically redistribute data as your application's needs change. 
The service monitors shard utilization and can automatically split or merge shards to maintain optimal performance.</li>    <li>Microsoft Azure's Cosmos DB offers partitioning capabilities that function similarly to sharding, automatically distributing data across multiple physical partitions based on partition key strategies that developers define.</li></ul><p>These services demonstrate how the industry has evolved to provide sharding capabilities that were once available only to companies with extensive database expertise and infrastructure resources.</p><h1 class="blog-sub-title">How Navicat Simplifies Database Sharding Management</h1><p>Working with sharded databases, whether through managed services or custom implementations, presents unique challenges for database administrators and developers. <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>'s comprehensive database administration and development tools provide essential capabilities that significantly streamline the management of sharded database environments.</p><p>Navicat's multi-database connectivity features allow administrators to establish connections to multiple shards simultaneously, providing a unified interface for managing distributed data. This capability proves invaluable when you need to execute administrative tasks across multiple database instances or when troubleshooting issues that span multiple shards.</p><p>The visual query builder and SQL editor in Navicat help developers construct and test queries that work effectively within sharded environments. 
Understanding how queries will perform across different shards becomes crucial for maintaining application performance, and Navicat's tools provide the visibility needed to optimize these distributed queries.</p><p>Additionally, Navicat's data synchronization and comparison tools become particularly valuable in sharded environments where maintaining data consistency and performing migrations between shards requires careful coordination. These tools help ensure that data remains properly distributed and synchronized across the sharded infrastructure.</p><h1 class="blog-sub-title">Conclusion</h1><p>Database Sharding as a Service represents a significant advancement in making horizontal database scaling accessible to organizations of all sizes. By abstracting the complexity of shard management while providing the performance benefits of distributed data storage, these services enable businesses to focus on growth rather than infrastructure challenges. As applications continue to generate ever-increasing amounts of data, understanding and leveraging these sharding services, supported by comprehensive database management tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>, becomes essential for maintaining competitive advantage in our data-driven world.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 17.3 Review: AI-Powered Database Management Takes Center Stage</title>
<link>https://www.navicat.com/company/aboutus/blog/3365-navicat-17-3-review-ai-powered-database-management-takes-center-stage.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 17.3 Review: AI-Powered Database Management Takes Center Stage</title></head><body><b>Jul 29, 2025</b> by Robert Gravelle<br/><br/><p>Database administrators and developers looking for a comprehensive database management solution will find plenty to appreciate in Navicat 17.3, the latest iteration of PremiumSoft's flagship database tool. This update represents a significant leap forward, particularly in artificial intelligence integration and database connectivity options, positioning Navicat as a forward-thinking solution in an increasingly competitive market.</p><h1 class="blog-sub-title">Expanded Database Universe</h1><p>The most immediately practical enhancement in version 17.3 is the expanded roster of supported database connections. The addition of five new connection types demonstrates Navicat's commitment to supporting diverse database ecosystems. Fujitsu Enterprise Postgres support acknowledges the growing enterprise adoption of PostgreSQL variants, while Azure Cosmos DB for MongoDB integration reflects the reality of hybrid cloud deployments that many organizations face today.</p><p>The inclusion of database systems like Dameng and KingBaseES is particularly noteworthy, suggesting Navicat's awareness of the global database landscape and the increasing importance of domestic database solutions in various markets. 
IvorySQL support further strengthens the PostgreSQL coverage, giving users more flexibility when working with PostgreSQL-compatible databases.</p><p>This expanded connectivity transforms Navicat from a multi-database tool into a truly universal database management platform, reducing the need for multiple specialized tools in complex environments.</p><img alt="new_connection_dialog (174K)" src="https://www.navicat.com/link/Blog/Image/2025/20250729/new_connection_dialog.jpg" height="732" width="902" /><h1 class="blog-sub-title">AI Integration: The Game Changer</h1><p>Where Navicat 17.3 truly distinguishes itself is in its comprehensive AI integration. The support for a range of AI models, including ChatGPT, Deepseek, Google Gemini, Ollama - and the newly added Grok, Claude, and Qwen - is a game changer, demonstrating Navicat's understanding that different AI assistants excel in different scenarios. This multi-model approach allows users to leverage the unique strengths of each platform rather than being locked into a single AI ecosystem.</p><p>The "Ask AI" feature with pinnable favorite actions represents thoughtful user experience design. Rather than forcing users to repeatedly access AI features through complex menus, the ability to pin frequently used AI actions creates personalized workflows that can significantly accelerate daily tasks. This is particularly valuable for SQL optimization and code quality improvements, where iterative refinement is common.</p><p>The "Compare with" functionality addresses a real pain point in AI-assisted development: the uncertainty about whether you're getting the best possible solution. By allowing direct comparison between different AI assistants' responses, users can make more informed decisions about their database queries and structures.</p><p>The "Fix with AI" and "Explain SQL" features tackle two of the most time-consuming aspects of database work. 
Automated error resolution suggestions can dramatically reduce debugging time, while SQL explanation capabilities serve as both a learning tool for junior developers and a documentation aid for complex queries.</p><img alt="ai_menu (57K)" src="https://www.navicat.com/link/Blog/Image/2025/20250729/ai_menu.jpg" height="240" width="711" /><h1 class="blog-sub-title">User Interface Refinements</h1><p>The user interface improvements, while more subtle than the AI enhancements, address practical daily workflow issues. Color-coded text comparison makes it significantly easier to spot differences in database schemas, reducing eye strain and potential errors during data analysis.</p><img alt="text_compare (152K)" src="https://www.navicat.com/link/Blog/Image/2025/20250729/text_compare.jpg" height="882" width="734" /><p>The improved code completion with "great-grandparent cases" support specifically addresses complex database relationships, where traditional autocomplete often falls short. The expandable Information Pane provides better workspace management, allowing users to maximize screen real estate when needed while keeping essential information accessible.</p><img alt="expand_button (24K)" src="https://www.navicat.com/link/Blog/Image/2025/20250729/expand_button.jpg" height="372" width="326" /><h1 class="blog-sub-title">The Verdict</h1><p>Navicat 17.3 represents a mature evolution of database management software, successfully balancing traditional database administration needs with cutting-edge AI capabilities. The expanded database support ensures broad compatibility, while the thoughtful AI integration provides genuine productivity improvements rather than superficial feature additions.</p><p>For database professionals working in multi-platform environments or those looking to leverage AI assistance in their daily workflows, Navicat 17.3 offers compelling value. 
The combination of universal database connectivity and intelligent assistance positions it as a tool that can grow with evolving database landscapes and development practices.</p><p>Navicat 17.3 is available for download today. For more information about the new features, visit the <a class="default-links" href="https://www.navicat.com/en/navicat-17-highlights" target="_blank">Navicat 17 Highlights page</a>. You'll also find a link to try or buy Navicat 17.3 at the bottom of the page.</p></body></html>]]></description>
</item>
<item>
<title>Building Tomorrow's Green Data Infrastructure with Sustainability-Focused Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3363-building-tomorrow-s-green-data-infrastructure-with-sustainability-focused-databases.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Building Tomorrow's Green Data Infrastructure with Sustainability-Focused Databases</title></head><body>  <b>Jul 22, 2025</b> by Robert Gravelle<br/><br/><p>As organizations worldwide grapple with mounting environmental challenges, the technology sector faces increasing pressure to reduce its carbon footprint. Data centers alone consume approximately 1% of global electricity, making database efficiency a critical component of corporate sustainability strategies. Sustainability-focused databases represent a paradigm shift from traditional performance-only metrics to encompass environmental impact, energy efficiency, and resource optimization alongside conventional database capabilities.</p><p>This article explores how sustainability-focused databases represent a fundamental shift in data management philosophy, balancing traditional performance metrics with environmental considerations like energy efficiency and resource optimization to help organizations reduce their carbon footprint while maintaining reliable data operations. The key insight here is that this isn't simply about making existing databases use less power - it's about rethinking the entire approach to database design from the ground up. Just as hybrid cars required engineers to reconsider the fundamental relationship between power and efficiency, sustainable databases require us to view computational performance through an environmental lens, creating systems that are both effective and environmentally responsible. </p><h1 class="blog-sub-title">The Green Revolution in Data Architecture</h1><p>Sustainability-focused databases prioritize environmental considerations throughout their architecture and operation. Unlike conventional databases that optimize primarily for speed and reliability, these systems balance performance with energy consumption, hardware longevity, and resource utilization. 
The core principle involves minimizing computational overhead while maintaining data integrity and accessibility.</p><p>Think of this approach like designing a hybrid car versus a traditional vehicle. While both need to transport passengers efficiently, the hybrid considers fuel consumption and emissions as equally important design constraints. Similarly, sustainable databases weigh energy costs against query performance, seeking optimal efficiency rather than maximum speed at any environmental cost.</p><p>These databases typically incorporate several key features: intelligent query optimization that reduces processing cycles, compression algorithms that minimize storage requirements, and adaptive scaling that adjusts resource allocation based on actual demand rather than peak capacity planning.</p><h1 class="blog-sub-title">Environmental Impact and Energy Efficiency</h1><p>The environmental implications of database operations extend far beyond immediate electricity consumption. Traditional databases often operate with significant overhead, maintaining multiple redundant processes and keeping servers at constant high-performance states regardless of actual workload demands.</p><p>Sustainability-focused systems address this through dynamic resource management. When query loads decrease during off-peak hours, these databases can scale down processing power, reduce memory allocation, and even power down unnecessary hardware components. This approach parallels how modern buildings use smart lighting systems that automatically adjust brightness based on occupancy and natural light levels.</p><p>Furthermore, these databases optimize data storage through advanced compression techniques and intelligent archiving strategies. 
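To see why compression figures so prominently in the storage footprint, consider a toy measurement using Python's standard zlib module. The payload and the resulting ratio are purely illustrative - real savings depend heavily on the data - but repetitive, log-style cold data of the kind that gets archived often compresses dramatically.

```python
import zlib

# A repetitive, log-style payload - the kind of cold data that archives well.
rows = b"2025-07-22,sensor-17,temperature,21.4\n" * 10_000
compressed = zlib.compress(rows, level=9)

ratio = len(compressed) / len(rows)
print(f"{len(rows)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")

# Decompression recovers the data exactly - compression here is lossless.
assert zlib.decompress(compressed) == rows
```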
By reducing the physical storage footprint, organizations decrease their need for additional hardware, thereby reducing manufacturing-related emissions and extending the operational lifespan of existing infrastructure.</p><h1 class="blog-sub-title">Implementation Strategies and Best Practices</h1><p>Implementing sustainability-focused databases requires a comprehensive approach that considers both technical architecture and operational procedures. Organizations should begin by conducting energy audits of their existing database infrastructure to establish baseline consumption metrics and identify optimization opportunities.</p><p>The implementation process typically involves migrating to database systems that support dynamic scaling, implementing intelligent caching mechanisms to reduce redundant queries, and establishing data lifecycle management policies that automatically archive or compress older information. Companies should also consider geographic factors, such as locating data centers in regions with abundant renewable energy sources.</p><p>Successful implementation also requires staff training and cultural adaptation. Database administrators need to understand how traditional performance tuning techniques may conflict with sustainability goals, and development teams must learn to write queries that balance speed with resource efficiency. This educational component often proves as crucial as the technical migration itself.</p><h1 class="blog-sub-title">Database Management Tools and Administration</h1><p>Professional database administration tools play a vital role in successfully managing sustainability-focused databases. <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>'s comprehensive database administration and development platform provides essential capabilities for organizations transitioning to environmentally conscious data management. 
The suite offers advanced monitoring features that track both performance metrics and resource utilization patterns, enabling administrators to identify optimization opportunities and measure environmental impact improvements.</p><p>Navicat's tools facilitate efficient database design through visual modeling capabilities that help developers create optimized schemas from the outset. The platform's query optimization features assist in writing efficient SQL statements that minimize processing overhead, while its automated backup and maintenance scheduling reduces the need for energy-intensive manual interventions during peak usage periods.</p><h1 class="blog-sub-title">Conclusion</h1><p>Sustainability-focused databases represent more than an environmental initiative; they embody a fundamental rethinking of how we approach data management in a resource-constrained world. As regulatory pressures increase and stakeholder expectations evolve, organizations that proactively adopt sustainable database practices will find themselves better positioned for long-term success. The integration of environmental considerations into database design creates opportunities for cost reduction, operational efficiency, and competitive differentiation while contributing to broader climate goals. The transition requires careful planning and the right tools, but the benefits extend well beyond immediate environmental impact to encompass improved resource utilization and often enhanced system reliability.</p></body></html>]]></description>
</item>
<item>
<title>Quantum-Resistant Encryption in Modern Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3354-quantum-resistant-encryption-in-modern-databases.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Quantum-Resistant Encryption in Modern Databases</title></head><body><b>Jul 9, 2025</b> by Robert Gravelle<br/><br/><p>The advent of quantum computing poses an unprecedented threat to traditional encryption methods that have secured our digital infrastructure for decades. Current cryptographic systems, including RSA, elliptic curve cryptography (ECC), and Diffie-Hellman key exchange, rely on mathematical problems that are computationally difficult for classical computers to solve. However, quantum computers running Shor's algorithm could theoretically break these encryption schemes in record time, rendering them virtually useless. </p>  <p>This threat isn't merely theoretical. Major technology companies and governments are investing billions in quantum computing research, with IBM, Google, and others achieving significant quantum milestones. While large-scale, fault-tolerant quantum computers capable of breaking current encryption may still be years away, the "Y2Q" (Years to Quantum) countdown has already begun. Organizations must prepare now, as encrypted data stolen today could be decrypted once quantum computers mature - a concept known as "harvest now, decrypt later" attacks. This article explains how quantum computing threatens current encryption methods and how modern databases are implementing quantum-resistant encryption algorithms to protect data from future quantum computer attacks.</p><h1 class="blog-sub-title">Guarding Against the Quantum Threat with Quantum-Resistant Encryption</h1><p>Quantum-resistant encryption, also called post-quantum cryptography (PQC), represents a new class of cryptographic algorithms designed to withstand attacks from both classical and quantum computers. 
Unlike current methods based on integer factorization or discrete logarithms, quantum-resistant algorithms rely on mathematical problems that remain difficult even for quantum computers.</p><p>The National Institute of Standards and Technology (NIST) has been leading the standardization effort, selecting several algorithms after rigorous evaluation. Key approaches include lattice-based cryptography (CRYSTALS-Kyber for key encapsulation, CRYSTALS-Dilithium for digital signatures), hash-based signatures (SPHINCS+), and code-based cryptography. These algorithms offer varying trade-offs between security, performance, and key sizes, allowing organizations to choose appropriate solutions for their specific needs.</p><h1 class="blog-sub-title">Modern Database Support for Quantum-Resistant Encryption</h1><p>Database vendors are proactively implementing quantum-resistant encryption to protect sensitive data. IBM DB2 has integrated CRYSTALS-Kyber and CRYSTALS-Dilithium algorithms, providing quantum-safe key exchange and digital signatures. Oracle Database has added post-quantum cryptography support in recent versions, focusing on protecting data at rest and in transit.</p><p>Microsoft SQL Server now supports NIST-approved quantum-safe algorithms, while PostgreSQL offers extensions for post-quantum encryption capabilities. Cloud database providers are also advancing quantum readiness - Amazon RDS and Aurora participate in AWS's quantum-safe cryptography initiatives, Google Cloud SQL supports post-quantum TLS protocols, and Azure SQL Database implements Microsoft's quantum-resistant solutions.</p><p>Specialized databases like CockroachDB have built-in quantum-resistant algorithm support, while MongoDB Atlas and Apple's FoundationDB offer post-quantum encryption options. 
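Hash-based signatures such as SPHINCS+ build on a much older idea: one-time signatures constructed purely from hash functions. The minimal Lamport scheme below is a teaching sketch, not the SPHINCS+ algorithm itself - it can sign only one message per key pair - but it shows why security rests solely on the hash function, which is exactly what makes this family attractive against quantum attacks.

```python
import hashlib
import secrets

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def sign(msg, sk):
    # For each bit of the message digest, reveal one secret of the pair.
    digest = hashlib.sha256(msg).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(msg, sig, pk):
    # Hash each revealed secret and check it against the public key.
    digest = hashlib.sha256(msg).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(bits))
```

Schemes like SPHINCS+ layer many such one-time keys into a tree so a single public key can sign many messages; the one-message-per-key restriction here is what the real designs engineer away.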
These implementations typically focus on three critical areas: encrypting data at rest, securing data in transit through quantum-safe TLS, and protecting authentication processes with quantum-resistant digital signatures.</p><h1 class="blog-sub-title">Navicat: Secure Database Administration in the Quantum Era</h1><p>As organizations transition to quantum-resistant encryption, reliable database administration tools become crucial for managing security implementations effectively. <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>'s comprehensive database administration and development tools provide essential capabilities for working securely with modern databases. The platform supports secure connections across multiple database systems, enabling administrators to manage encrypted databases with confidence.</p><p>Navicat's tools facilitate secure database connections through advanced encryption protocols, helping database professionals implement and maintain security best practices. The platform's intuitive interface allows administrators to configure security settings, monitor database access, and ensure compliance with evolving cryptographic standards without compromising productivity or functionality.</p><h1 class="blog-sub-title">Conclusion</h1><p>The transition to quantum-resistant encryption represents one of the most significant security upgrades in computing history. As quantum computing advances, organizations cannot afford to wait - the time for preparation is now. Modern database systems are already implementing post-quantum cryptography, providing the foundation for long-term data security.</p><p>Success in this transition requires not only adopting quantum-resistant algorithms but also utilizing professional-grade database administration tools that support secure implementation and management. 
By combining quantum-safe encryption with robust database management practices, organizations can build resilient data infrastructure ready for the quantum future.</p></body></html>]]></description>
</item>
<item>
<title>Blockchain Databases: Where Innovation and Traditional Data Management Collide</title>
<link>https://www.navicat.com/company/aboutus/blog/3352-blockchain-databases-where-innovation-and-traditional-data-management-collide.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en">  <head>      <meta charset="UTF-8">      <title>Blockchain Databases: Where Innovation and Traditional Data Management Collide</title>  </head>  <body><b>Jul 4, 2025</b> by Robert Gravelle<br/><br/>  <p>Blockchain technology has rapidly evolved from its cryptocurrency origins to become a compelling data management system in its own right. Modern blockchain databases represent a significant advancement in how organizations approach data integrity, transparency, and security. These systems combine the benefits of distributed ledger technology with the functionality of traditional database management systems, creating hybrid solutions that address long-standing challenges in data governance. As enterprises increasingly seek solutions that provide immutable audit trails and verifiable transaction history, blockchain databases have emerged as a promising option that balances innovation with practical business requirements. This article describes how blockchain databases work and lists some of the most popular blockchain database solutions, along with some traditional alternatives implementing similar features. Finally, we'll examine how specialized tools like Navicat are helping organizations bridge these two worlds.</p>        <h1 class="blog-sub-title">Understanding Blockchain Databases</h1>  <p>Blockchain databases fundamentally differ from conventional databases in their architecture and operating principles. While traditional databases typically function as centralized repositories managed by a single authority, blockchain databases distribute data across multiple nodes in a network. Each transaction or data change is recorded in the ledger as a "block" that contains a cryptographic hash linking it to the previous block, creating an unalterable chain of information. 
This structure ensures that once data is recorded, it cannot be modified without consensus from the network, providing unprecedented levels of data integrity and auditability.</p>    <p>The core features that distinguish blockchain databases include immutability, decentralized consensus mechanisms, cryptographic verification, and transparent transaction history. These characteristics make blockchain databases particularly valuable for applications requiring robust audit trails, such as financial systems, supply chain management, and regulatory compliance.</p>    <h1 class="blog-sub-title">Leading Blockchain Database Solutions</h1>  <p>Several blockchain database platforms have gained prominence in the enterprise space. Here are just a few:</p>  <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li><strong>BigchainDB</strong> combines the scalability of traditional distributed databases with blockchain features like immutability and decentralized control. It's designed for use cases requiring high throughput while maintaining blockchain's core benefits.</li>        <li><strong>Hyperledger Fabric</strong>, developed under the Linux Foundation, offers a permissioned blockchain framework specifically designed for enterprise use. It supports complex queries, private channels for sensitive data, and modular architecture that allows for customizable consensus mechanisms.</li>        <li><strong>Amazon QLDB (Quantum Ledger Database)</strong> provides a centrally managed ledger database with an immutable and cryptographically verifiable transaction log. Though not fully decentralized, it offers many blockchain benefits without the complexity of managing a distributed network.</li>        <li><strong>FlureeDB</strong> represents a new generation of blockchain databases, integrating graph database capabilities with blockchain features. 
This allows for complex data relationships while maintaining verifiable history and time-travel queries.</li>        <li><strong>Blockstore</strong> implements a decentralized key-value store using blockchain principles, making it suitable for applications requiring simple data structures with strong integrity guarantees.</li>  </ul>    <h1 class="blog-sub-title">Traditional Databases with Blockchain-Like Features</h1>  <p>Traditional database vendors have recognized the value of blockchain's core principles and incorporated similar features into their products:</p>  <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li><strong>PostgreSQL</strong> can implement append-only tables and cryptographic verification through extensions like pgcrypto, enabling some blockchain-like capabilities within a familiar relational environment.</li>        <li><strong>MongoDB</strong> offers change streams and immutable field features that provide auditing capabilities similar to blockchain ledgers, although without distributed consensus.</li>        <li><strong>Oracle Blockchain Tables</strong> extend standard Oracle database functionality with immutability guarantees and cryptographic verification, allowing organizations to maintain familiar SQL interfaces while gaining some blockchain benefits.</li>        <li><strong>Microsoft SQL Server Ledger</strong> introduces tamper-evidence features through cryptographic verification of historical data, addressing compliance and audit requirements within a traditional database framework.</li>        <li><strong>Immudb</strong> provides an open-source immutable database with cryptographic verification without the full overhead of blockchain, striking a balance between conventional database performance and blockchain integrity.</li>  </ul>  <h1 class="blog-sub-title">Database Administration with Navicat</h1>  <p>For organizations implementing blockchain databases or blockchain-like features in traditional 
systems, effective database administration tools become essential. <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>'s suite of database management and development tools has evolved to support these advanced database technologies. Navicat provides intuitive interfaces for connecting to and managing both traditional databases with blockchain features and dedicated blockchain database systems.</p>    <p>Navicat's visual query builders and data visualization capabilities help developers and administrators work effectively with complex blockchain data structures. The tool's robust security features align well with blockchain's emphasis on data integrity, offering encrypted connections and comprehensive access controls. For teams implementing hybrid database architectures that combine blockchain and traditional elements, Navicat's support for multiple database types within a single interface streamlines workflow and reduces the learning curve associated with new technologies.</p>    <h1 class="blog-sub-title">Conclusion</h1>  <p>Blockchain databases represent a significant evolution in data management, introducing principles of immutability and distributed verification that address crucial gaps in traditional systems. The key distinction remains that true blockchain databases distribute trust across multiple parties through decentralized consensus, while traditional databases implementing blockchain-like features maintain centralized control while adding verification layers.</p>    <p>As organizations evaluate their data management strategies, the choice between pure blockchain databases, traditional systems with blockchain features, or hybrid approaches will depend on specific requirements for performance, scalability, compliance, and governance. 
With the support of advanced administration tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>, teams can effectively implement and manage these sophisticated database solutions, leveraging the best of both worlds to create robust, verifiable data systems suitable for today's complex business environments.</p>  </body></html>]]></description>
</item>
<item>
<title>The Rise of Embedded AI/ML Capabilities in Modern Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3350-the-rise-of-embedded-ai-ml-capabilities-in-modern-databases.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>  <title>The Rise of Embedded AI/ML Capabilities in Modern Databases</title></head><body><b>Jun 27, 2025</b> by Robert Gravelle<br/><br/>  <h1 class="blog-sub-title">Introduction</h1>  <p>The modern world is undergoing a significant transformation with the integration of Artificial Intelligence (AI) and Machine Learning (ML) capabilities into practically every facet of our lives.  The emerging trend of embedded AI/ML functionality has now made its way into database systems, forever changing how organizations process, analyze, and derive value from their data assets. Rather than extracting data from databases to perform analytics in separate environments, these new systems enable real-time insights and predictions within the database itself, eliminating data movement and accelerating time-to-insight. This article will explore how the embedding of AI/ML capabilities directly into database systems enables real-time analytics, eliminates data movement challenges, and democratizes access to advanced predictive capabilities across organizations.</p>  <h1 class="blog-sub-title">The Evolution of Database Intelligence</h1>  <p>Traditional database systems have primarily served as repositories for structured data storage and retrieval. Over time, they evolved to incorporate more advanced analytical capabilities, but these were often limited to aggregations, statistical functions, and basic pattern recognition. The latest evolution brings sophisticated machine learning algorithms directly into the database engine, creating a unified platform for both data management and advanced analytics.</p>  <p>This convergence addresses a fundamental challenge in the data science workflow: the constant movement of data between storage systems and analytical environments. 
By embedding AI/ML capabilities within the database itself, organizations can dramatically reduce latency, enhance security, and improve governance while maintaining data freshness.</p>  <h1 class="blog-sub-title">Key Capabilities and Benefits</h1>  <p>Embedded AI/ML in databases offers several transformative capabilities. Automated feature engineering can identify relevant patterns and relationships within datasets, reducing the manual effort traditionally required from data scientists. Real-time anomaly detection can continuously monitor incoming data streams, immediately flagging unusual patterns that might indicate fraud, system failures, or business opportunities.</p>  <p>Predictive analytics functions allow users to create and deploy models using SQL-like syntax, democratizing access to sophisticated forecasting capabilities. These models can be trained on historical data and automatically updated as new information arrives, maintaining their accuracy over time without external intervention.</p>  <p>From an operational standpoint, the benefits are substantial. Processing data where it resides eliminates the security risks associated with data movement between systems. It also reduces infrastructure complexity and costs by consolidating what were previously separate systems for storage and analytics. The simplified architecture leads to better governance, as security policies, access controls, and audit trails can be managed in a single environment.</p>  <h1 class="blog-sub-title">Leading Database Platforms Embracing AI/ML Integration</h1>  <p>Major database vendors have recognized this trend and are rapidly enhancing their offerings. Microsoft SQL Server has introduced Machine Learning Services, enabling R and Python code execution within the database engine. Oracle's Autonomous Database incorporates machine learning algorithms for self-tuning, security, and predictive analytics. 
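The "process the data where it resides" principle these platforms share can be illustrated in miniature with plain SQL aggregates. The sketch below uses SQLite, which has no ML functions of its own, so it fits a least-squares line from aggregate values rather than using any service's actual CREATE MODEL syntax; only five numbers leave the engine, not the raw rows:

```python
import sqlite3

# Toy illustration of in-database analytics: fit y = a*x + b from SQL
# aggregates so the raw rows never leave the database engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (ad_spend REAL, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8), (5, 11.1)])

# One pass over the table yields every statistic the model needs.
n, sx, sy, sxy, sxx = conn.execute("""
    SELECT COUNT(*), SUM(ad_spend), SUM(revenue),
           SUM(ad_spend * revenue), SUM(ad_spend * ad_spend)
    FROM sales
""").fetchone()

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = sy / n - slope * sx / n
print(f"revenue = {slope:.2f} * ad_spend + {intercept:.2f}")
# → revenue = 1.99 * ad_spend + 1.05
```

Managed offerings like Redshift ML and BigQuery ML generalize this pattern to real model training and inference, exposed through SQL statements.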
PostgreSQL extensions like MADlib provide scalable in-database machine learning algorithms through SQL interfaces.</p>  <p>Cloud-native databases have been particularly quick to adopt these capabilities. Amazon Redshift ML allows users to create, train, and deploy machine learning models using SQL commands. Google BigQuery ML similarly enables machine learning model building directly in the data warehouse using standard SQL syntax, while Snowflake's Snowpark brings data science workloads directly to where data resides.</p>  <h1 class="blog-sub-title">Database Management Tools Incorporating AI</h1>  <p>Database management tools are also incorporating AI technologies to enhance user experience and productivity. These tools leverage artificial intelligence to assist database administrators and developers with query optimization, schema design, and data management tasks. One notable example is <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>'s AI Assistant feature. Released in version 17.2, Navicat AI Assistant is an integrated tool that provides instant, contextual guidance and answers within a software application, leveraging artificial intelligence to help users solve problems, understand features, and improve their workflow through natural language interactions. Navicat's AI Assistant helps you write your SQL statements more efficiently. It does this by submitting your inquiries to the AI providers for processing, with responses sent exclusively back to the Navicat application installed on your local device. You can receive guidance from many of the popular AI chatbots, including ChatGPT, Google Gemini, DeepSeek, and Ollama.</p>  <h1 class="blog-sub-title">Conclusion</h1>  <p>The integration of AI/ML capabilities directly into database systems represents a natural evolution in data management technology. 
As organizations continue to grapple with exponentially growing data volumes and increasingly complex analytical requirements, embedded AI/ML functionality will become a standard feature rather than a differentiator.</p>  <p>This trend promises to democratize access to advanced analytics, allowing organizations of all sizes to derive actionable insights from their data assets without the complexity and expense of maintaining separate analytical infrastructures. As these technologies mature, we can expect even deeper integration between traditional database functions and cutting-edge AI/ML capabilities, further blurring the lines between data storage, management, and analysis.</p></body></html>]]></description>
</item>
<item>
<title>Immutable Databases: the Evolution of Data Integrity?</title>
<link>https://www.navicat.com/company/aboutus/blog/3347-immutable-databases-the-evolution-of-data-integrity.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Immutable Databases: the Evolution of Data Integrity?</title></head><body>    <b>Jun 23, 2025</b> by Robert Gravelle<br/><br/>    <p>In the evolving realm of database technology, immutable databases have emerged as a powerful new trend in data management that prioritizes data integrity and historical preservation. Unlike traditional databases where data can be modified or deleted, immutable databases only allow data addition, creating a permanent, tamper-proof record of all information. This article explores the rise of immutable databases and covers how database management tools like Navicat can help organizations effectively leverage these powerful capabilities.</p>    <h1 class="blog-sub-title">The Concept of Immutability</h1>    <p>Immutability in databases means that once data is written, it cannot be changed or deleted. Instead of updating or removing existing records, new versions are appended, preserving the complete history of changes. This append-only model ensures data integrity, simplifies auditing, and enables point-in-time recovery capabilities that traditional databases struggle to provide efficiently.</p>    <p>The immutable approach transforms how we think about data storage. Rather than maintaining the current state of data, immutable databases maintain the entire evolution of data over time. This shift brings significant advantages for compliance, security, and system reliability, particularly in industries where data provenance and auditability are critical.</p>    <h1 class="blog-sub-title">Notable Immutable Database Examples</h1>    <p>Several database systems have embraced immutability as their core design principle. Here are some of the main ones:</p>        <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">        <li><strong>Datomic</strong> pioneered the immutable database concept with its time-aware architecture. 
It stores all facts as datoms (atomic pieces of data) with time coordinates, allowing queries against any historical state without performance penalties.</li>            <li><strong>LMDB</strong> (Lightning Memory-Mapped Database) implements immutability through a copy-on-write mechanism, providing exceptional read performance and crash resilience.</li>            <li><strong>InfluxDB</strong>, primarily a time-series database, incorporates immutability for time-series data points, making it ideal for monitoring applications and systems where historical data must be preserved accurately.</li>            <li>Event sourcing databases like <strong>EventStoreDB</strong> maintain an immutable log of all events, allowing systems to reconstruct state at any point in time by replaying events from the beginning or from snapshots.</li>            <li>Blockchain databases like <strong>BigchainDB</strong>, <strong>Amazon Quantum Ledger Database</strong> (QLDB), and <strong>Hyperledger Fabric</strong> represent perhaps the strictest implementation of immutability, where the cryptographic linking of data blocks makes historical records practically impossible to alter without detection.</li>    </ul>        <h1 class="blog-sub-title">Traditional Databases Adopting Immutability</h1>    <p>Recognizing the benefits of immutability, many traditional database systems have begun incorporating immutable features:</p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">        <li><strong>PostgreSQL</strong> has implemented time travel capabilities through extensions like temporal_tables and pgaudit, allowing developers to query data as it existed at previous points in time.</li>            <li>Microsoft <strong>SQL Server</strong> introduced Temporal Tables in SQL Server 2016, providing built-in support for tracking historical data changes with period tables.</li>            <li><strong>Oracle</strong> database offers Flashback Query functionality, enabling 
users to view data as it existed at a specific time in the past without complex recovery procedures.</li>            <li><strong>MongoDB</strong> implemented Change Streams to provide applications with a real-time feed of data changes, preserving modification history in a way that mirrors some immutable database concepts.</li>            <li>Amazon's <strong>DynamoDB</strong> offers Point-in-Time Recovery features that maintain a complete change history for tables, allowing restoration to any second in the previous 35 days.</li>    </ul>        <h1 class="blog-sub-title">Database Management with Navicat</h1>    <p>When working with databases that incorporate immutable features, powerful database management tools become essential. <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> stands out as a comprehensive solution that supports all major database systems implementing immutability concepts, including PostgreSQL, MySQL, MariaDB, SQL Server, Oracle, and MongoDB.</p>    <p>Navicat's intuitive interface allows database administrators to effectively manage temporal data and historical records created by immutable database features. Its visual query builder can construct complex queries against temporal tables, while its data modeling tools help design schemas that effectively incorporate immutability. For organizations transitioning to immutable data patterns, Navicat's synchronization and migration tools streamline the process of moving data between different database systems while preserving historical integrity.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Immutable databases represent a fundamental shift in how we store, process, and think about data. By prioritizing the preservation of history and guaranteeing data integrity, they provide solutions to many challenges faced in data management today. 
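The point-in-time query pattern behind these features can be sketched with a hypothetical append-only `accounts_history` table. This is a toy illustration of the versioning idea, not any vendor's temporal table syntax:

```python
import sqlite3

# Minimal append-only versioning: "updates" insert a new row version instead
# of overwriting, so any past state can be reconstructed (time travel).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts_history (
        account_id TEXT,
        balance    REAL,
        valid_from INTEGER   -- logical timestamp when this version was written
    )
""")

def set_balance(account_id, balance, ts):
    # Never UPDATE or DELETE: just append the new version.
    conn.execute("INSERT INTO accounts_history VALUES (?, ?, ?)",
                 (account_id, balance, ts))

def balance_as_of(account_id, ts):
    # The latest version written at or before ts is the state "as of" ts.
    row = conn.execute("""
        SELECT balance FROM accounts_history
        WHERE account_id = ? AND valid_from <= ?
        ORDER BY valid_from DESC LIMIT 1
    """, (account_id, ts)).fetchone()
    return row[0] if row else None

set_balance("acct-1", 100.0, ts=1)
set_balance("acct-1", 40.0, ts=5)   # a later change appends, never overwrites

print(balance_as_of("acct-1", 3))   # → 100.0 (the historical state)
print(balance_as_of("acct-1", 9))   # → 40.0  (the current state)
```

Temporal tables and Flashback Query automate exactly this bookkeeping inside the engine, including the period columns and the "as of" query rewriting.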
As traditional database systems continue to adopt immutability features, and purpose-built immutable databases mature, organizations gain powerful new tools for compliance, auditing, and system resilience. With proper management tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a>, leveraging these capabilities becomes accessible even to teams without specialized knowledge of immutable data structures.</p></body></html>]]></description>
</item>
<item>
<title>Seamless Information Access Through Data Virtualization and Federation</title>
<link>https://www.navicat.com/company/aboutus/blog/3345-seamless-information-access-through-data-virtualization-and-federation.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Seamless Information Access Through Data Virtualization and Federation</title></head><body><b>Jun 18, 2025</b> by Robert Gravelle<br/><br/>    <p>Modern enterprises face an unprecedented data management challenge. Organizations typically store their data across numerous systems: cloud storage platforms, on-premises databases of various types, data warehouses, NoSQL repositories, SaaS applications, and specialized analytical systems. This data fragmentation creates significant obstacles for business users and analysts who need a comprehensive view of information to make decisions. Retrieving data from multiple systems requires mastering various query languages, understanding different data models, and manually integrating results: tasks too complex and time-consuming for most business users. The traditional solution of copying all data into a centralized repository creates its own problems: data duplication, staleness, increased storage costs, and complex synchronization processes. This article explores how data virtualization and federation technologies create a unified view of enterprise data scattered across disparate systems.</p>    <h1 class="blog-sub-title">What is Data Virtualization and Federation?</h1>    <p>Data virtualization represents a new approach to data integration that addresses these fundamental challenges. Rather than physically moving and consolidating data, data virtualization creates an abstraction layer that provides users and applications with unified, real-time access to data across disparate sources. This technology acts as a semantic layer that hides the technical complexities of underlying data systems, presenting a simplified view that users can interact with using familiar query tools and business intelligence interfaces. 
The virtualization engine translates user requests into source-specific queries, executes them across the relevant systems, and assembles the results into a coherent response - all while maintaining the illusion that users are working with a single, integrated data source.</p>    <p>Data federation functions as a fundamental architectural component within data virtualization solutions. Federation specifically addresses the mechanics of querying multiple heterogeneous data sources and combining their results. Federation engines decompose complex queries, determine which portions should be executed on which source systems, optimize these distributed query plans, and then reassemble the partial results. Modern federation technologies employ sophisticated optimization techniques, including pushing operations like filtering and aggregation down to source systems when possible, minimizing data transfer across networks, and caching frequently accessed data. Federation creates a virtual unified schema that maps fields from different systems into a coherent data model, handling complex transformations like field name standardization, data type conversion, and computational derivations.</p>    <h1 class="blog-sub-title">Business Benefits of Virtualization and Federation</h1>    <p>Implementing data virtualization and federation delivers several transformative business benefits. First, it dramatically accelerates time-to-insight by eliminating the need for physical data consolidation projects that often take months to complete. Business users gain immediate access to integrated views across systems, enabling faster decision-making. Second, these technologies reduce overall data management costs by minimizing unnecessary data replication and storage. Third, data virtualization enhances data governance by maintaining a single access point where security policies, data quality rules, and regulatory controls can be consistently applied. 
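Those federation mechanics (decompose, push down, reassemble) can be sketched in miniature. In the sketch below, two SQLite connections stand in for heterogeneous source systems, and the table and field names are invented for illustration; a real federation engine adds cost-based planning, caching, and type mapping on top of this core loop:

```python
import sqlite3

# Two independent "source systems" holding customer data under different
# field names; the federation layer maps both into one virtual schema.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE clients (client_name TEXT, region TEXT)")
crm.executemany("INSERT INTO clients VALUES (?, ?)",
                [("Acme", "EU"), ("Globex", "US")])

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE customers (cust TEXT, territory TEXT)")
billing.executemany("INSERT INTO customers VALUES (?, ?)",
                    [("Initech", "EU"), ("Umbrella", "APAC")])

SOURCES = [
    # (connection, source-specific query with the filter *pushed down*)
    (crm,     "SELECT client_name, region FROM clients WHERE region = ?"),
    (billing, "SELECT cust, territory FROM customers WHERE territory = ?"),
]

def federated_customers_in(region):
    """Decompose the request, execute per source, and reassemble one result
    set under the unified schema (name, region)."""
    results = []
    for source_conn, query in SOURCES:
        results.extend(source_conn.execute(query, (region,)).fetchall())
    return sorted(results)

print(federated_customers_in("EU"))  # → [('Acme', 'EU'), ('Initech', 'EU')]
```

The filter travels to each source rather than being applied after the fact, which is the same pushdown optimization that keeps production federation engines from shipping whole tables across the network.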
Perhaps most importantly, virtualization creates agility: as business requirements evolve, virtual views can be modified without disrupting the underlying systems or requiring extensive ETL modifications. This flexibility proves particularly valuable when integrating new data sources or adapting to organizational changes.</p>    <h1 class="blog-sub-title">Implementation Considerations and Challenges</h1>    <p>Successfully implementing data virtualization requires careful planning and awareness of potential challenges. Performance management represents the foremost concern: federated queries that span multiple systems inevitably introduce some latency compared to queries against a single optimized database. Organizations must develop strategies for managing this trade-off, such as implementing intelligent caching mechanisms, pre-aggregating commonly accessed data, or establishing clear performance expectations with users. Data security presents another critical consideration, as virtualization creates new access paths to sensitive information. Implementers must ensure that security controls remain consistent across the virtual layer and all underlying sources. Finally, organizations must recognize that virtualization complements rather than replaces other data integration approaches; some use cases still benefit from physical consolidation, particularly those requiring historical analysis of large datasets or complex analytical processing.</p>    <h1 class="blog-sub-title">Tools for Data Virtualization and Federation</h1>    <p>Database management tools like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> can play a valuable supporting role in data virtualization and federation initiatives. While not a dedicated virtualization platform itself, Navicat provides capabilities that enhance the planning, implementation, and management phases of these projects. 
Its visual query builder allows database professionals to design and test complex federated queries across heterogeneous database environments. Navicat's schema comparison and synchronization features help maintain consistency across data sources that participate in federation schemas. The tool's support for multiple database types, including MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB, facilitates the cross-platform data access essential to federation. Additionally, Navicat's data modeling capabilities assist in designing the unified semantic layer that makes virtualized data meaningful to business users, bridging the technical details of diverse sources with a coherent business-friendly representation.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Data virtualization and federation technologies represent a strategic approach to enterprise data integration challenges. By creating a unified access layer that preserves the underlying distribution of data, these technologies enable organizations to balance the competing demands of data consolidation and specialization. While implementing virtualization requires careful consideration of performance, security, and governance factors, the resulting benefits (faster time-to-insight, reduced data management costs, and enhanced organizational agility) make it an essential component of modern data architecture.</p></body></html>]]></description>
</item>
<item>
<title>Database DevOps Integration: Bridging the Gap Between Development and Operations</title>
<link>https://www.navicat.com/company/aboutus/blog/3343-database-devops-integration-bridging-the-gap-between-development-and-operations.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Database DevOps Integration: Bridging the Gap Between Development and Operations</title></head><body> <b>Jun 13, 2025</b> by Robert Gravelle<br/><br/>    <p>In traditional software development workflows, database changes have often been treated as an afterthought. While application code follows well-defined DevOps practices with version control, automated testing, and continuous deployment, database changes frequently remain manual, risky operations performed by database administrators. This disconnect creates bottlenecks, introduces errors, and slows down the entire development process. Organizations find themselves unable to deliver value quickly when database changes become the limiting factor in deployments. In this article, we'll explore how integrating database changes into DevOps workflows creates a more seamless development pipeline, examining the challenges, components, benefits, and implementation strategies of Database DevOps.</p>    <h1 class="blog-sub-title">What is Database DevOps?</h1>    <p>Database DevOps extends DevOps principles to database management, treating database code with the same rigor and automation as application code. It aims to bridge the gap between developers and database administrators by implementing consistent processes for database changes throughout the application lifecycle. The core philosophy is that database changes should be version-controlled, tested automatically, and deployed through reliable, repeatable processes - just like application code.</p>    <h1 class="blog-sub-title">Key Components of Database DevOps</h1>    <p>The successful implementation of Database DevOps relies on several interconnected components. First, all database objects (tables, views, stored procedures, and functions) must be represented as scripts in a version control system like Git. 
This provides a single source of truth for the database schema and enables collaboration between team members.</p>        <p>Second, continuous integration pipelines should automatically validate database changes. This includes syntax checking, running static analysis tools to identify potential performance issues, and executing tests against a test database to verify that changes won't break existing functionality.</p>        <p>Third, Database DevOps requires automated deployment tools that can apply changes to databases in different environments. These tools must handle complex scenarios like data migrations, schema changes, and rollbacks while preserving data integrity.</p>        <p>Finally, monitoring and observability tools complete the feedback loop by providing insights into database performance and potential issues, allowing teams to make informed decisions about future improvements.</p>    <h1 class="blog-sub-title">Benefits of Database DevOps Integration</h1>    <p>Organizations that successfully implement Database DevOps experience numerous benefits. Development cycles accelerate as database changes no longer create bottlenecks in the deployment process. The risk of production issues decreases thanks to thorough automated testing and consistent deployment processes. Compliance improves through comprehensive change tracking and auditing capabilities. Team collaboration strengthens when developers and database administrators work together using shared tools and processes. Perhaps most importantly, businesses can respond more quickly to market changes and customer needs when database changes can be deployed rapidly and reliably.</p>    <h1 class="blog-sub-title">Implementation Strategies</h1>    <p>Implementing Database DevOps requires a strategic approach. Start small by identifying a suitable project or database for a pilot implementation. Focus initially on version-controlling your database schemas and building basic validation tests. 
As your team gains confidence, expand to include more complex elements like stored procedures and functions.</p>        <p>Invest in training for both developers and database administrators to ensure everyone understands the new processes and tools. Create clear guidelines for database changes, including naming conventions, documentation requirements, and review processes.</p>        <p>Consider adopting a migration-based approach where each change is represented as a discrete migration script that can be applied in sequence. This approach makes it easier to track changes and perform rollbacks if necessary.</p>    <h1 class="blog-sub-title">Tools for Database DevOps</h1>    <p>Successful Database DevOps implementation relies on appropriate tooling. Database management systems like <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat</a> provide many essential capabilities that support DevOps practices for databases:</p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">     <li>Navicat offers schema comparison and synchronization features that help identify and deploy database changes systematically. </li>     <li>Its data modeling capabilities serve as part of the database design process, while query building and optimization features improve code quality.</li>      <li>Navicat's ability to generate SQL scripts that can be included in version control systems bridges the gap between database administration and development practices, making it a valuable component in a Database DevOps toolchain.</li>    </ul>        <h1 class="blog-sub-title">Conclusion</h1>    <p>Database DevOps integration represents a significant evolution in how organizations manage database changes. 
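The migration-based approach described above can be sketched as a minimal runner. The migration names below are hypothetical, and production tools such as Flyway or Liquibase add checksums, locking, and rollback handling on top of this core loop:

```python
import sqlite3

# Each schema change is a discrete, ordered migration script; a bookkeeping
# table records which ones have been applied so the runner is idempotent.
MIGRATIONS = [  # (version, SQL) -- in a real project these live in Git
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email",    "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version, sql in MIGRATIONS:
        if version in applied:
            continue  # already deployed in this environment; skip
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to re-run: nothing is applied twice
cols = [c[1] for c in conn.execute("PRAGMA table_info(users)")]
print(cols)  # → ['id', 'name', 'email']
```

Because every environment replays the same ordered scripts, development, staging, and production schemas converge on the same state, and the `schema_migrations` table doubles as an audit trail of what changed and when.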
By applying the same principles of automation, version control, and continuous testing that have transformed application development, teams can eliminate the database bottleneck and achieve truly continuous delivery of value to their customers. While challenges exist, particularly around cultural change and legacy systems, the benefits of faster delivery, reduced risk, and improved collaboration make Database DevOps a worthwhile investment for any organization that relies on databases to deliver value.</p></body></html>]]></description>
</item>
<item>
<title>Navicat Sponsors SQLBits 2025 &ndash; Supporting the Future of Data Platforms</title>
<link>https://www.navicat.com/company/aboutus/blog/3333-navicat-sponsors-sqlbits-2025-&ndash;-supporting-the-future-of-data-platforms.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>Navicat Sponsors SQLBits 2025 – Supporting the Future of Data Platforms</title></head><body><b>Jun 6, 2025</b><br/><br/><img src="https://www.navicat.com/link/Blog/Image/2025/20250606/SQLBits-neon.jpg" width="800px">    <p>We're proud to announce that Navicat is a sponsor of SQLBits 2025, taking place in London from 18–21 June. As the largest Data Platform conference in the world, SQLBits brings together over 200 leading experts and a community committed to advancing the data industry.</p>    <p>Our sponsorship reflects our strong support for the ongoing evolution of modern data platforms and the professionals who build and manage them. With over 95% of the content being non-marketing driven, SQLBits offers invaluable real-world insights into today's most effective data strategies.</p>    <p>At Navicat, we're dedicated to creating AI-powered database tools that enhance productivity, simplify management, and empower developers as part of a modern DevOps ecosystem. We remain committed to helping teams work smarter, faster, and more efficiently.</p><p> Learn more about the event at <a class="default-links" href="http://sqlbits.com/" target="_blank">sqlbits.com</a></p></body></html>]]></description>
</item>
<item>
<title>Edge Databases: Empowering Distributed Computing Environments</title>
<link>https://www.navicat.com/company/aboutus/blog/3331-edge-databases-empowering-distributed-computing-environments.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Edge Databases: Empowering Distributed Computing Environments</title></head><body> <b>May 30, 2025</b> by Robert Gravelle<br/><br/>    <p>Edge computing has revolutionized how we process data by bringing computation closer to data sources. As organizations deploy more IoT devices, mobile applications, and distributed systems, the need for efficient edge database solutions has grown significantly. These specialized databases are designed to operate effectively on devices with limited processing power, memory, and network connectivity while ensuring data remains available and processable even when disconnected from central servers. Edge databases represent a fundamental shift in how we think about data architecture, enabling real-time processing and analytics where data is generated rather than requiring constant transmission to distant data centers. This article explores the emerging field of edge database solutions, examining how these specialized data management systems are designed to operate efficiently on devices with limited resources at the network periphery, comparing their unique benefits to traditional database approaches, and highlighting key technologies that enable local data processing and synchronization in disconnected or bandwidth-constrained environments.</p>    <h1 class="blog-sub-title">What Are Edge Databases?</h1>    <p>Edge databases are specialized data management systems optimized to run on edge devices such as smartphones, IoT sensors, retail terminals, manufacturing equipment, and other computing devices operating at the network periphery. Unlike traditional database systems that assume consistent connectivity and substantial computing resources, edge databases are engineered with different priorities. 
They're designed to be lightweight with minimal resource consumption, support offline operations, synchronize efficiently when connectivity is available, and provide reliable local data processing capabilities regardless of connection status.</p>        <p>These databases typically implement sophisticated data synchronization mechanisms that can resolve conflicts when devices reconnect after operating independently. They often employ intelligent data prioritization to ensure critical information is processed first when bandwidth is limited. The architecture of edge databases emphasizes fault tolerance and resilience, acknowledging the challenging and often unpredictable environments in which edge devices operate.</p>    <h1 class="blog-sub-title">Benefits of Edge Database Solutions</h1>    <p>Edge databases deliver several significant advantages over traditional centralized approaches:</p>        <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Latency reduction stands as perhaps the most immediate benefit - by processing data locally, applications can respond in real-time without waiting for round-trip communication with distant servers. This speed improvement proves crucial for time-sensitive applications like industrial control systems, autonomous vehicles, or medical devices where milliseconds matter.</li>        <li>Privacy and security improve substantially as sensitive data can be processed locally without transmission across networks. This localized approach helps organizations comply with data sovereignty requirements and reduces overall vulnerability to network-based attacks.</li>        <li>Bandwidth consumption decreases dramatically as only necessary data needs transmission to central systems rather than raw data streams. 
This efficiency translates directly to cost savings, particularly important in environments with metered or expensive connectivity.</li>        <li>Reliability improves as applications continue functioning during network outages or in regions with inconsistent connectivity. This resilience ensures continuous operation in remote locations, developing regions, or crisis scenarios where network infrastructure may be compromised.</li>    </ul>        <h1 class="blog-sub-title">Popular Edge Database Solutions</h1>    <p>Several database technologies have emerged specifically designed for edge computing scenarios:</p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">      <li>SQLite stands as perhaps the most widely deployed embedded database, powering countless applications across mobile devices and embedded systems. Its small footprint (approximately 600KB) and self-contained design make it ideal for edge deployments while still offering robust SQL capabilities.</li>            <li>CouchDB and its mobile variant PouchDB provide powerful document-oriented databases with sophisticated synchronization mechanisms. Their multi-master replication allows multiple edge devices to operate independently and later reconcile changes seamlessly.</li>            <li>RxDB combines reactive programming principles with offline-first architecture, making it particularly well-suited for progressive web applications and mobile scenarios. Its observable queries automatically update user interfaces when underlying data changes.</li>            <li>Firebase Realtime Database offers real-time synchronization capabilities with offline support, simplifying development while handling complex networking challenges transparently.</li>            <li>Berkeley DB provides a high-performance embedded database requiring minimal configuration while offering advanced features like transactions and recovery.</li>    </ul>    <h1 class="blog-sub-title">Edge Databases vs. 
Traditional Solutions</h1>    <p>Traditional database systems like MySQL, PostgreSQL, and SQL Server were designed assuming consistent network connectivity, steady power supply, and substantial computing resources. These assumptions make them poorly suited for edge environments where intermittent connectivity and resource constraints are the norm.</p>        <p>Cloud database services like Amazon DynamoDB, Google Cloud Spanner, and Azure Cosmos DB offer powerful capabilities but generally require consistent connectivity to function properly. While these services increasingly offer offline capabilities, they still primarily operate under a centralized model.</p>        <p>Edge databases, in contrast, prioritize local operation first, with synchronization as a secondary concern. They employ sophisticated conflict resolution mechanisms that traditional databases often lack, handling the reality that multiple devices may independently modify the same data while disconnected.</p>    <h1 class="blog-sub-title">Management Tools for Edge Databases</h1>    <p>Managing distributed edge databases presents unique challenges compared to centralized systems. Administrators need visibility into device status, synchronization health, and data consistency across potentially thousands of endpoints. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> may be used to manage edge databases, offering tools to monitor synchronization status, troubleshoot replication conflicts, and ensure data integrity across distributed systems. Having proper management tools becomes essential as edge deployments scale to ensure system reliability and data consistency.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Edge database solutions represent a critical evolution in data management philosophy, recognizing that not all data processing must occur in centralized clouds. 
As edge computing continues expanding across industries, these specialized databases will play an increasingly vital role in enabling responsive, resilient applications that work reliably regardless of network conditions. Organizations implementing edge strategies should carefully evaluate database options based on their specific requirements for synchronization, offline capability, and resource efficiency to build truly effective distributed systems.</p></body></html>]]></description>
</item>
<item>
<title>The Rise of Low-Code/No-Code Database Interfaces: Democratizing Data Management</title>
<link>https://www.navicat.com/company/aboutus/blog/3329-the-rise-of-low-code-no-code-database-interfaces-democratizing-data-management.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>The Rise of Low-Code/No-Code Database Interfaces: Democratizing Data Management</title></head><body><b>May 22, 2025</b> by Robert Gravelle<br/><br/><p>As the volume of data collected continues to increase at an exponential rate, the ability to effectively manage and analyze information has become critical across virtually every industry. Traditionally, working with databases required specialized technical skills, including proficiency in Structured Query Language (SQL) and database architecture principles. However, the emergence of low-code and no-code database interfaces is fundamentally transforming how organizations interact with their data assets. These innovative platforms empower business users, analysts, and even technical professionals to accomplish sophisticated database tasks with minimal manual coding, effectively democratizing access to data management capabilities while accelerating development cycles. This article examines the key benefits of these interfaces, their organizational impact, and how tools like Navicat put the low-code approach into practice.</p><h1 class="blog-sub-title">Low-Code versus No-Code Database Tools</h1><p>Low-code and no-code database interfaces represent different points on a spectrum of tools designed to reduce the complexity of database operations. No-code solutions eliminate coding requirements entirely, typically offering intuitive visual interfaces where users can design databases, create queries, and build applications through drag-and-drop functionality and pre-built components. These platforms are ideal for business users who lack programming expertise but need to create functional database applications rapidly.</p><p>Low-code interfaces, meanwhile, strike a balance between visual development and traditional coding. 
They provide graphical tools for common database tasks while still allowing developers to insert custom code when necessary for more complex operations. This hybrid approach enables technical professionals to dramatically accelerate their workflow while maintaining the flexibility to address unique requirements that purely visual tools might not accommodate.</p><h1 class="blog-sub-title">Key Benefits for Organizations</h1><p>The adoption of low-code/no-code database interfaces offers numerous advantages for organizations of all sizes. Development speed increases substantially, with projects that might have taken months now potentially completed in days or weeks. This acceleration is particularly valuable in rapidly evolving business environments where the ability to quickly adapt data systems provides a competitive edge. The democratization effect is equally significant, as these platforms enable domain experts without technical backgrounds to create and modify database applications that address their specific needs, without depending entirely on IT departments.</p><p>From a resource perspective, low-code/no-code solutions reduce the technical debt that often accumulates in traditional development environments. By generating standardized, maintainable code automatically, these platforms help organizations avoid the pitfalls of inconsistent coding practices. Additionally, they allow organizations to allocate their specialized database developers to more complex, high-value projects while empowering other team members to handle routine database tasks independently.</p><h1 class="blog-sub-title">Navicat: Pioneering Low-Code Database Management</h1><p>Among the leading solutions in this space, <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> stands out for its comprehensive approach to low-code database management across multiple database platforms. 
Navicat offers several powerful features for database developers and administrators alike that significantly reduce the need to write SQL manually. For instance, its Visual Query Builder transforms the query creation process from a coding exercise to an intuitive visual experience, allowing users to construct complex SQL statements by simply dragging tables, defining joins, and selecting conditions through a graphical interface.</p><p>Navicat's data modeling tools further exemplify the low-code philosophy, enabling users to design database schemas visually and automatically generate the corresponding SQL for table creation and relationships. For data migration and synchronization tasks that would typically require extensive scripting, Navicat provides streamlined wizards that guide users through the process with minimal coding requirements. Additionally, its Stored Procedure Builder allows for the visual creation of database procedures and functions, abstracting away much of the complexity inherent in procedural SQL.</p><p>For routine database administration, Navicat reduces SQL writing through its comprehensive management interface, where common tasks like user privilege management, index creation, and performance monitoring can be accomplished through intuitive dialogues rather than manual commands. This low-code approach not only increases productivity but also reduces the likelihood of syntax errors that often occur when writing complex SQL statements manually.</p><h1 class="blog-sub-title">The Future Landscape</h1><p>As artificial intelligence and machine learning capabilities continue to evolve, low-code/no-code database interfaces are poised to become even more powerful. The integration of intelligent assistants that can suggest optimizations, predict user intentions, and even generate complex queries based on natural language descriptions represents the next frontier in this space. 
Furthermore, these platforms will likely incorporate more sophisticated data analysis capabilities, enabling users not just to manage data but to derive actionable insights through visual analytics tools.</p><p>Low-code/no-code database interfaces are revolutionizing how organizations interact with their data assets by removing technical barriers and accelerating development cycles. Solutions like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> exemplify how these platforms can empower users across technical skill levels to accomplish sophisticated database tasks efficiently. As these technologies continue to mature, they will play an increasingly pivotal role in helping organizations leverage their data for competitive advantage in an increasingly data-driven business landscape.</p></body></html>]]></description>
</item>
<item>
<title>Data Vault 2.0: A Modern Approach to Enterprise Data Modeling</title>
<link>https://www.navicat.com/company/aboutus/blog/3323-data-vault-2-0-a-modern-approach-to-enterprise-data-modeling.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>Data Vault 2.0: A Modern Approach to Enterprise Data Modeling</title></head><body><b>May 16, 2025</b> by Robert Gravelle<br/><br/>    <p>Today, organizations face unprecedented challenges in managing vast amounts of information from diverse sources. Traditional data modeling approaches often struggle to keep pace with the volume, variety, and velocity of modern data requirements. Data Vault 2.0 is a modern data modeling methodology specifically designed to address these challenges, offering a flexible, scalable, and auditable approach to enterprise data modeling. This article explores the core principles, components, and benefits of Data Vault 2.0, highlighting why it has become increasingly popular for large-scale data warehousing projects.</p>    <h1 class="blog-sub-title">Origins and Evolution</h1>  <p>Data Vault methodology was originally developed by Dan Linstedt in the early 2000s as a response to the limitations of traditional approaches like Kimball's dimensional modeling and Inmon's normalized models. Data Vault 1.0 introduced the core concepts of hubs, links, and satellites, creating a framework that separated business keys, relationships, and descriptive attributes. Data Vault 2.0, introduced around 2013, represents a significant evolution of the original methodology, incorporating best practices for big data, cloud computing, and agile development processes. It expanded beyond just a data modeling technique to become a comprehensive system for enterprise data warehousing.</p>    <h1 class="blog-sub-title">Core Components of Data Vault 2.0</h1>  <p>The Data Vault 2.0 architecture consists of three fundamental building blocks that form the backbone of its modeling approach: </p>  <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li>Hubs represent business keys and core business concepts, serving as stable anchors in the model. 
They contain minimal information - primarily business keys and their metadata.</li>     <li>Links capture relationships between business keys, representing associations between different business entities. They are essentially many-to-many relationship tables that connect two or more hubs. </li>    <li>Satellites store descriptive attributes and context about hubs or links, including historical changes. They contain time-stamped descriptive information, enabling the tracking of how data evolves over time.</li>  </ul>     <p>This three-component structure creates a highly flexible model that can adapt to changing business requirements without requiring significant restructuring. By separating business keys from relationships and descriptive information, Data Vault 2.0 achieves a level of modularity that facilitates parallel development and integration of new data sources.</p>    <h1 class="blog-sub-title">Key Principles and Benefits</h1>  <p>Data Vault 2.0 is guided by several core principles that distinguish it from other data modeling methodologies. The approach is designed around auditability, tracking all data from source to target with complete lineage. It emphasizes scalability through its modular design, allowing organizations to expand their data warehouse incrementally without disrupting existing structures. The methodology supports adaptability to changing business requirements, a crucial advantage in today's dynamic business environment.</p>    <p>Organizations implementing Data Vault 2.0 often report significant benefits. The methodology enables faster integration of new data sources, sometimes reducing implementation time by 30-40% compared to traditional approaches. It provides enhanced traceability and compliance capabilities, which are increasingly important in regulated industries. 
Perhaps most importantly, Data Vault 2.0 creates resilient data structures that can evolve alongside the business, protecting the substantial investment that organizations make in their data infrastructure.</p>    <h1 class="blog-sub-title">Implementation Considerations</h1>  <p>While Data Vault 2.0 offers compelling advantages, implementing it requires careful planning and consideration. Organizations typically need to invest in appropriate tools and training to successfully adopt the methodology. The approach works best when implemented with automation tools that can generate and maintain the model structures, as the number of tables can grow significantly compared to other methodologies. Teams often benefit from specialized expertise, particularly during the initial phases of implementation.</p>    <h1 class="blog-sub-title">Navicat Data Modeler and Data Vault 2.0</h1>  <p><a class="default-links" href="https://www.navicat.com/en/products/navicat-data-modeler" target="_blank">Navicat Data Modeler</a> stands out as a powerful tool for organizations implementing Data Vault 2.0. It's ideal for designing complex data systems for various applications using Relational, Dimensional, and Data Vault 2.0 methodologies, ranging from transactional systems and operational databases to analytical platforms and data warehousing solutions. You can also use Navicat Data Modeler to effectively visualize data structures and relationships, making it easier to identify optimization opportunities and ensure alignment with business objectives.</p>    <h1 class="blog-sub-title">Conclusion</h1>  <p>Data Vault 2.0 represents a sophisticated approach to enterprise data modeling that addresses many of the limitations of traditional methodologies. By providing a flexible, scalable, and auditable framework, it enables organizations to create data warehouses that can adapt to changing business needs while maintaining historical accuracy and data lineage. 
As data continues to grow in both volume and strategic importance, methodologies like Data Vault 2.0 will play an increasingly crucial role in helping organizations derive maximum value from their information assets.</p></body></html>]]></description>
</item>
<item>
<title>Streaming-First Architectures: Revolutionizing Real-Time Data Processing</title>
<link>https://www.navicat.com/company/aboutus/blog/3312-streaming-first-architectures-revolutionizing-real-time-data-processing.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>Streaming-First Architectures: Revolutionizing Real-Time Data Processing</title></head><body>    <b>May 9, 2025</b> by Robert Gravelle<br/><br/>    <p>In recent years, traditional database systems have struggled to keep pace with the demands of real-time analytics, IoT applications, and instantaneous decision-making in the increasingly complex and fast-moving data environments of modern organizations. Built around batch processing and static data models, RDBMSes were simply not designed to handle data in real time. Streaming-first architectures represent a fundamental shift in how data is captured, processed, and utilized, prioritizing continuous data flow and immediate insights over historical, retrospective analysis. This article details the rise of streaming-first architectures, examining how these innovative approaches are reshaping data processing by enabling real-time insights, continuous event streaming, and immediate actionable intelligence across diverse industries.</p>    <h1 class="blog-sub-title">From Batch Processing to Streaming</h1>    <p>The shift towards streaming-first architectures is rooted in the limitations of traditional database approaches. Historically, organizations relied on batch processing, where data would be collected, stored, and then analyzed during specific intervals. This method worked well when business cycles were slower and data volumes were more manageable. However, the digital transformation has created an environment where data is generated continuously, from millions of sources including social media, IoT devices, financial transactions, and real-time monitoring systems. 
Streaming-first architectures address this challenge by treating data as a continuous flow of events, allowing for immediate processing and analysis as information is generated.</p>    <h1 class="blog-sub-title">Pioneering Streaming Platforms</h1>    <p>Apache Kafka has emerged as the front-runner in streaming-first architectures, revolutionizing how organizations approach data integration and real-time processing. Originally developed by LinkedIn, Kafka provides a distributed streaming platform that can handle massive volumes of data with exceptional reliability and scalability. Companies like Uber, Netflix, and Airbnb have built entire data infrastructures around Kafka's event streaming capabilities. Apache Flink offers another powerful solution, providing sophisticated stream processing with strong consistency guarantees. These platforms enable organizations to build complex event-driven systems that can react to data in real-time, transforming how businesses make decisions and respond to changing conditions.</p>    <h1 class="blog-sub-title">Traditional Databases Embrace Streaming</h1>    <p>Recognizing the importance of streaming capabilities, many traditional database systems have begun integrating native support for streaming architectures: </p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>PostgreSQL, for instance, has developed extensions like pg_stream that enable real-time data ingestion and processing.</li>     <li>MongoDB introduced change streams, allowing applications to access real-time data changes without the complexity of tailing the oplog. 
</li>    <li>Oracle Database provides Oracle Stream Analytics, which enables complex event processing and real-time insights.</li>     <li>Microsoft SQL Server has developed its own streaming capabilities through Azure Stream Analytics, allowing seamless integration of streaming data with traditional database operations.</li>    </ul>    <h1 class="blog-sub-title">Industry-Specific Applications</h1>    <p>The impact of streaming-first architectures extends across multiple industries: </p>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Financial institutions use these technologies for real-time fraud detection and algorithmic trading.</li>     <li>Manufacturing companies leverage streaming data for predictive maintenance and quality control.</li>     <li>Healthcare providers implement streaming architectures to monitor patient data and enable immediate interventions. </li>    <li>E-commerce platforms use streaming technologies to personalize user experiences and manage inventory in real-time. </li>    </ul>    <p>The ability to process and act on data instantaneously has transformed these industries, creating competitive advantages for organizations that can effectively implement streaming-first approaches.</p>    <h1 class="blog-sub-title">Management and Monitoring Challenges</h1>    <p>For organizations working with these complex streaming databases and platforms, management tools have become increasingly important. For instance, <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provides support for managing various streaming databases, offering interfaces that can connect to and monitor different streaming platforms. 
This allows database administrators to oversee and optimize their streaming architectures, providing crucial visibility into data flows and system performance across different technologies and environments.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>Streaming-first architectures represent more than just a technological trend - they signify a fundamental shift in how organizations conceptualize and utilize data. As the volume and velocity of data continue to increase, these architectures will become increasingly critical for businesses seeking to maintain competitive advantages. The ability to process and act on data in real-time is no longer a luxury but a necessity in our rapidly evolving data-driven world.</p></body></html>]]></description>
</item>
<item>
<title>Navicat Proudly Sponsors PGConf.de 2025 as Silver Sponsor (Two Free Tickets Up for Grabs!)</title>
<link>https://www.navicat.com/company/aboutus/blog/3311-navicat-proudly-sponsors-pgconf-de-2025-as-silver-sponsor-two-free-tickets-up-for-grabs.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>Navicat Proudly Sponsors PGConf.de 2025 as Silver Sponsor (Two Free Tickets Up for Grabs!)</title></head><body><b>May 6, 2025</b> by Robert Gravelle<br/><br/>    <p>We are excited to announce that Navicat is joining the PostgreSQL Conference Germany 2025 as a Silver sponsor! As part of our ongoing commitment to the PostgreSQL community, we are proud to support this premier event and help foster innovation and collaboration among database professionals.</p>    <h1 class="blog-sub-title">Event Details:</h1>    <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Event: <a class="default-links" href="https://2025.pgconf.de/" target="_blank">PostgreSQL Conference Germany 2025</a></li><li>Date: May 8-9, 2025</li><li>Venue: Berlin Marriott Hotel, Berlin, Germany</li></ul>    <p>As a Silver sponsor, Navicat has been given two complimentary attendee vouchers for the conference. We want to share this opportunity with our community - so we are giving away two free tickets! If you are interested in attending PostgreSQL Conference Germany 2025, simply contact us at <a class="default-links" href="mailto:media@navicat.com">media@navicat.com</a> for your chance to receive a ticket. Tickets will be distributed on a first-come, first-served basis.</p>    <p>Navicat is a strong supporter of the PostgreSQL community and is committed to backing more PostgreSQL events around the world. Stay tuned to our blog for updates on future events and more chances to win free tickets!</p></body></html>]]></description>
</item>
<item>
<title>Recent Innovations in the Realm of Database-as-a-Service</title>
<link>https://www.navicat.com/company/aboutus/blog/3309-recent-innovations-in-the-realm-of-database-as-a-service.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>Recent Innovations in the Realm of Database-as-a-Service</title></head><body><b>Apr 28, 2025</b> by Robert Gravelle<br/><br/>    <p>Database-as-a-Service (DBaaS) has been a cornerstone of cloud computing for over a decade, but recent developments have significantly expanded its capabilities and reach. While the core concept of delivering managed database services in the cloud is not new, the past few years have witnessed remarkable innovations that are reshaping how organizations approach data management. This article explores several noteworthy advancements in the DBaaS landscape, from the emergence of truly serverless database offerings to the integration of artificial intelligence for autonomous operations. We'll examine how these developments are transforming the economics of database management, enabling new use cases, and providing organizations with unprecedented flexibility in how they deploy and manage their data infrastructure across multiple environments.</p>    <h1 class="blog-sub-title">The Rise of Serverless Databases</h1>    <p>Perhaps the most transformative recent trend in DBaaS is the rise of truly serverless database offerings. Unlike earlier cloud database models that required some level of capacity planning, serverless databases automatically scale compute and storage resources in response to workload demands - all the way down to zero during periods of inactivity. AWS Aurora Serverless, Azure SQL Database serverless, and MongoDB Atlas Serverless have pioneered this approach, introducing consumption-based pricing models that align costs directly with actual usage. 
This model eliminates the need for capacity planning and removes the overhead of managing database resources, allowing development teams to focus entirely on application logic rather than infrastructure concerns.</p>    <h1 class="blog-sub-title">AI-Powered Database Management</h1>    <p>The integration of artificial intelligence and machine learning capabilities directly into database services represents another frontier in DBaaS evolution. Cloud providers now offer databases with built-in intelligence for query optimization, anomaly detection, and predictive scaling. Oracle Autonomous Database, for instance, uses machine learning to automate routine administration tasks like tuning, security patching, and backup, while Microsoft's Azure SQL Database employs AI to detect potential performance issues before they impact applications. These intelligent capabilities effectively transform databases from passive data repositories into active systems that continuously optimize themselves without human intervention.</p>    <h1 class="blog-sub-title">Multi-Cloud and Hybrid Deployments</h1>    <p>Multi-cloud and hybrid cloud database solutions have emerged as a response to growing concerns about vendor lock-in and the need for deployment flexibility. Services like CockroachDB, MongoDB Atlas, and DataStax Astra now provide consistent database experiences across multiple cloud environments and on-premises infrastructure. This approach gives organizations the freedom to deploy databases wherever makes the most business sense while maintaining operational consistency. 
For global enterprises with diverse regulatory requirements or legacy infrastructures, these multi-cloud databases offer a path to cloud adoption that doesn't compromise on deployment flexibility or data ownership concerns.</p>    <h1 class="blog-sub-title">Specialized Database Services</h1>    <p>The specialized database revolution continues to accelerate in the DBaaS space, with purpose-built database services optimized for specific data models and workloads. Time series databases like InfluxDB Cloud and TimescaleDB address the unique requirements of temporal data. Graph databases such as Neo4j Aura and Amazon Neptune provide native support for relationship-centric data models. Vector databases including Pinecone and Weaviate deliver high-performance similarity search for AI applications. This specialization trend acknowledges that different data workloads have distinct requirements that general-purpose databases struggle to address efficiently, leading to a variety of purpose-built services tailored to specific use cases.</p>    <h1 class="blog-sub-title">Unified Database Management Tools</h1>    <p>For organizations working with these diverse cloud database services, management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> have evolved to provide unified interfaces for working with multiple database platforms across different cloud environments. Navicat supports connections to various cloud databases including Amazon RDS, Azure SQL Database, and Google Cloud SQL, allowing database administrators to seamlessly manage their cloud databases alongside on-premises systems. 
This centralized approach to database management significantly streamlines operations for teams working with heterogeneous database environments, providing consistent tools for schema design, query execution, and performance monitoring across the increasing varieties of cloud database services.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>As we look toward the future of DBaaS, the line between different database models will likely continue to blur as services incorporate multiple data models within unified platforms. The emphasis on operational simplicity, automatic optimization, and consumption-based models will only strengthen as cloud providers compete to deliver optimal data management experiences. For organizations embarking on digital transformation initiatives, these advancements in DBaaS technology offer unprecedented opportunities to harness the power of data without the traditional burdens of database administration.</p></body></html>]]></description>
</item>
<item>
<title>Bridging Worlds: How Traditional Databases and Time-Series Solutions Work Together</title>
<link>https://www.navicat.com/company/aboutus/blog/3307-bridging-worlds-how-traditional-databases-and-time-series-solutions-work-together.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Bridging Worlds: How Traditional Databases and Time-Series Solutions Work Together</title></head><body><b>Apr 17, 2025</b> by Robert Gravelle<br/><br/>         <p>Time-Series Databases (TSDBs) have emerged as a specialized solution to one of modern computing's most significant challenges: the efficient storage, retrieval, and analysis of time-based data. As organizations collect ever more data from sensors, applications, and systems that generate readings at regular intervals, the limitations of traditional database systems for handling this type of data have become apparent. </p>        <p>Traditional relational database management systems (RDBMS) were designed for transactional workloads where relationships between different entities matter more than the temporal aspect of the data. While these systems can certainly store time-stamped data, they aren't optimized for the high-frequency writes, temporal queries, and data lifecycle management associated with time-series workloads. This limitation created the need for purpose-built solutions that could handle the unique characteristics of time-series data. This article explores how traditional and time-series database technologies integrate and complement each other, examining various implementation approaches.</p>    <h1 class="blog-sub-title">The Integration of Traditional and Time-Series Databases</h1>    <p>The evolution of TSDBs hasn't occurred in isolation from traditional database technologies. Rather, there has been a gradual integration of time-series capabilities into existing database frameworks, as well as the development of standalone systems that borrow concepts from traditional databases. 
This symbiotic relationship has led to a spectrum of solutions ranging from pure-play TSDBs to traditional databases with time-series extensions.</p>        <p>One of the most notable examples of this integration is TimescaleDB, which extends PostgreSQL to handle time-series data efficiently. By building on PostgreSQL's solid foundation, TimescaleDB inherits the reliability, SQL compatibility, and rich ecosystem of a mature RDBMS while adding specialized time-based indexing, automated partitioning, and optimized compression algorithms. This hybrid approach allows organizations to maintain a single database system for both relational and time-series data, reducing operational complexity.</p>        <p>Similarly, major database vendors like Microsoft and Oracle have incorporated time-series capabilities directly into their flagship products. Microsoft SQL Server offers temporal tables that track the history of data changes over time, while Oracle Database includes features specifically designed for managing time-series data within the context of a traditional RDBMS.</p>        <h1 class="blog-sub-title">Complementary Approaches and Cloud Solutions</h1>    <p>Beyond extensions to existing systems, many organizations adopt a complementary approach where traditional databases and dedicated TSDBs coexist within their data architecture. In these scenarios, operational data might reside in a traditional RDBMS like MySQL or Oracle, while high-frequency metrics, logs, and other time-stamped data are routed to specialized TSDBs like InfluxDB, Prometheus, or Graphite. Integration layers, often implemented through ETL (Extract, Transform, Load) processes or API-based data exchange, ensure that information can flow between these systems when cross-domain queries are required.</p>        <p>The rise of cloud computing has further blurred the lines between traditional and time-series databases. 
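</p><p>To make the extension approach concrete, here is a minimal sketch of how TimescaleDB layers time-series behavior onto an ordinary PostgreSQL table; the table and column names are illustrative rather than drawn from any particular application:</p><pre>
-- Enable the extension and create a plain relational table
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE sensor_readings (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT        NOT NULL,
    temperature DOUBLE PRECISION
);

-- Convert it into a hypertable, automatically partitioned by time
SELECT create_hypertable('sensor_readings', 'time');

-- Aggregate readings into hourly buckets with a standard SQL query
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM sensor_readings
GROUP BY bucket, device_id
ORDER BY bucket;
</pre><p>The hypertable continues to accept ordinary inserts and joins, so existing tooling keeps working while time-based partitioning happens behind the scenes. 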
Managed services like Amazon Timestream, Azure Data Explorer, and Google Cloud's BigQuery are designed to handle time-series workloads at scale while maintaining compatibility with traditional SQL-based query languages. These services abstract much of the underlying complexity, allowing developers to work with time-series data by leveraging familiar concepts from traditional database systems.</p>        <h1 class="blog-sub-title">Managing Diverse Database Ecosystems with Navicat</h1>    <p>For database administrators and developers tasked with managing these increasingly diverse systems, tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provide a unified interface for interacting with multiple databases. Navicat's versatility allows it to connect to both traditional RDBMS platforms like MySQL, PostgreSQL, and SQL Server and newer time-series-focused systems that offer SQL-compatible interfaces. Through Navicat, administrators can visually design schemas, write and test queries, and monitor performance across their entire database network.</p>        <h1 class="blog-sub-title">Conclusion</h1>    <p>The relationship between traditional databases and time-series databases is not one of replacement but of evolution and integration. Organizations today have multiple options for handling time-series data, from specialized standalone solutions to extensions of familiar database systems. As data volumes continue to grow and real-time analytics become increasingly important, we can expect further innovation in how these systems interact and complement each other. The ability to effectively manage these diverse database technologies through tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> will remain crucial for organizations seeking to derive maximum value from their time-based data.</p></body></html>]]></description>
</item>
<item>
<title>How Modern Databases Are Advancing Data Privacy Protection</title>
<link>https://www.navicat.com/company/aboutus/blog/3302-how-modern-databases-are-advancing-data-privacy-protection.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>How Modern Databases Are Advancing Data Privacy Protection</title></head><body>  <b>Apr 14, 2025</b> by Robert Gravelle<br/><br/>    <p>As organizations face increasing pressure to protect sensitive data while making it accessible to those who need it, database systems have evolved to incorporate sophisticated privacy-preserving features. These advancements represent a fundamental shift in how we approach data security, moving beyond simple encryption to provide comprehensive protection throughout the data lifecycle. This article explores how modern databases implement privacy protection and examines the practical implications for organizations managing sensitive information.</p>    <h1 class="blog-sub-title">The Evolution of Database Privacy</h1>    <p>Database privacy has evolved significantly from the early days of basic access controls. Modern privacy-preserving features operate at multiple levels, from the individual field to entire databases, providing granular control over how sensitive information is stored, accessed, and used. This layered approach ensures that organizations can maintain data utility while meeting stringent privacy requirements and regulatory standards.</p>    <h1 class="blog-sub-title">PostgreSQL's Advanced Privacy Features</h1>    <p>PostgreSQL leads the way in open-source database privacy with its comprehensive suite of security features. Its row-level security policies allow organizations to implement fine-grained access controls based on user context, ensuring that individuals can only access the specific data rows they're authorized to see. Column-level encryption adds another layer of protection by securing sensitive fields while keeping others accessible for analysis. 
These features enable organizations to implement sophisticated privacy policies without sacrificing database functionality.</p>    <h1 class="blog-sub-title">Oracle's Enterprise Privacy Solutions</h1>    <p>Oracle's database system brings enterprise-grade privacy features to large organizations. Its Transparent Data Encryption protects data at rest without requiring application changes, while Data Redaction enables dynamic masking of sensitive information based on user context. The Oracle Database Vault provides additional protection by restricting privileged user access to application data, preventing even database administrators from viewing sensitive information when unnecessary.</p>    <h1 class="blog-sub-title">Microsoft SQL Server's Privacy Innovation</h1>    <p>Microsoft SQL Server has introduced groundbreaking privacy features with its Always Encrypted capability, which enables clients to encrypt sensitive data inside application programs without revealing encryption keys to the database engine. This approach, combined with Dynamic Data Masking and Row-Level Security, provides a comprehensive privacy framework that organizations can implement across their data infrastructure.</p>    <h1 class="blog-sub-title">MongoDB's Modern Approach</h1>    <p>MongoDB has embraced modern privacy requirements with its Client-Side Field Level Encryption, allowing organizations to encrypt sensitive data before it even reaches the database server. Its Queryable Encryption feature takes this a step further by enabling queries on encrypted data without decryption, representing a significant advancement in privacy-preserving database technology.</p>    <h1 class="blog-sub-title">Implementation and Management</h1>    <p>Implementing privacy-preserving features requires careful planning and ongoing management. Organizations must balance privacy requirements with performance considerations, ensuring that protection mechanisms don't unduly impact database operations. 
Tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> play a crucial role in this process, providing unified interfaces for managing privacy features across different database platforms and helping organizations maintain consistent privacy policies throughout their data infrastructure. This centralized management approach is particularly valuable for organizations running multiple database systems, as it provides consistent tools and workflows for privacy management across their entire database infrastructure. Navicat's support for secure connections and encrypted sessions adds another layer of protection, ensuring that database management activities themselves don't create security vulnerabilities.</p>    <h1 class="blog-sub-title">The Future of Database Privacy</h1>    <p>As privacy concerns continue to grow and regulations become more stringent, we can expect database systems to develop even more sophisticated privacy-preserving features. The trend toward integrated privacy protection, where security is built into the core database architecture rather than added as an afterthought, will likely accelerate. Organizations that embrace these advanced features, while maintaining effective management through tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, will be well-positioned to protect sensitive data from competitors as well as from malicious third parties.</p></body></html>]]></description>
</item>
<item>
<title>How Zero-ETL Databases Are Transforming Modern Data Integration</title>
<link>https://www.navicat.com/company/aboutus/blog/3291-how-zero-etl-databases-are-transforming-modern-data-integration.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <title>How Zero-ETL Databases Are Transforming Modern Data Integration</title></head><body><b>Mar 28, 2025</b> by Robert Gravelle<br/><br/>    <p>In the world of data management, organizations have long struggled with the complexity and time-consuming nature of Extract, Transform, and Load (ETL) processes. Zero-ETL databases have emerged as a revolutionary solution to this challenge, promising to eliminate the traditional barriers between operational and analytical data systems. In this article, we'll learn how Zero-ETL databases work as well as examine the evolving role of traditional databases in modern data processing.</p>    <h1 class="blog-sub-title">Understanding Zero-ETL Databases</h1>    <p>Zero-ETL databases represent a fundamental shift in how we think about data integration. Instead of explicitly moving and transforming data between systems, these databases create direct pathways for data access and analysis. Think of it as replacing a manual assembly line with an automated production system - the end result is the same, but the process becomes seamless and immediate.</p>    <p>Major cloud providers have begun implementing Zero-ETL capabilities in their offerings. Snowflake provides native application integration, allowing direct data access without traditional ETL processes. Google BigQuery offers streamlined data integration capabilities, while Amazon Redshift has developed Zero-ETL integration with their Aurora database service. 
These solutions aim to make real-time analytics possible without the overhead of data movement.</p>    <h1 class="blog-sub-title">The Role of Traditional Databases</h1>    <p>Traditional databases continue to play an essential part in Zero-ETL architectures, often serving as primary data sources.</p>   <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">      <li>PostgreSQL, which offers enterprise-grade reliability and sophisticated data handling capabilities, frequently acts as a source database for Zero-ETL systems. Its advanced features enable direct integration with platforms like Snowflake and Amazon Redshift, allowing analytical queries without traditional data movement.</li>        <li>MySQL participates in Zero-ETL scenarios through native connectors and real-time change data capture systems. For example, Amazon's Aurora MySQL can share data with Redshift without explicit ETL processes, enabling immediate analysis of operational data. This integration preserves the strengths of MySQL while extending its analytical capabilities.</li>        <li>MongoDB brings its document-oriented approach to Zero-ETL architectures through features like Atlas Data Federation and change streams. These capabilities allow applications to access and analyze data directly from MongoDB without extracting it to separate analytical systems. </li>        <li>Redis, while primarily known as a high-performance cache, serves a unique role in Zero-ETL architectures: it acts as an intermediate layer that accelerates data access without requiring explicit ETL processes. </li>  </ul>    <h1 class="blog-sub-title">Benefits and Considerations</h1>    <p>The transition to Zero-ETL approaches offers significant advantages. Organizations can analyze data in real-time without waiting for ETL jobs to complete. This immediacy supports faster decision-making and more responsive business operations. 
The elimination of explicit ETL processes also reduces the potential for errors and decreases the maintenance burden on data teams.</p>    <p>However, implementing Zero-ETL solutions requires careful planning. Organizations must consider data consistency requirements, query performance expectations, and the specific capabilities of their chosen platforms. The role of traditional databases becomes even more critical in this context, as they must support both operational requirements and real-time analytical access.</p>    <p>Organizations using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> can manage their local and cloud database instances alongside Zero-ETL databases, creating a unified management experience across their data infrastructure.</p>    <h1 class="blog-sub-title">Looking Forward</h1>    <p>As Zero-ETL databases continue to evolve, we can expect to see even tighter integration with traditional database systems. We are also likely to see the boundaries between operational and analytical data blur at an accelerated rate. Organizations that embrace these technologies, while maintaining their expertise with traditional databases through tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, will be well-positioned to handle tomorrow's data challenges.</p></body></html>]]></description>
</item>
<item>
<title>Hybrid Transactional/Analytical Processing</title>
<link>https://www.navicat.com/company/aboutus/blog/3289-hybrid-transactional-analytical-processing.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>  <meta charset="UTF-8">  <title>Hybrid Transactional/Analytical Processing</title></head><body>  <b>Mar 20, 2025</b> by Robert Gravelle<br/><br/>    <p>In today's data-driven business landscape, organizations face the challenge of managing both day-to-day transactions and complex analytics within their database systems. Traditionally, these workloads were handled separately: Online Transaction Processing (OLTP) systems managed operational data, while Online Analytical Processing (OLAP) systems handled reporting and analysis. Hybrid Transactional/Analytical Processing (HTAP) has been gaining traction as a revolutionary approach that combines these capabilities into a unified system, enabling real-time analytics on operational data without the complexity and delays of traditional data warehousing. This blog article explores the fundamentals of HTAP architecture, examines how traditional databases have evolved to support HTAP capabilities, and discusses the role of database management tools in implementing HTAP solutions.</p>    <h1 class="blog-sub-title">Fundamentals of HTAP Architecture</h1>    <p>The fundamental principle behind HTAP is straightforward: maintain a single source of truth that can efficiently handle both transactional and analytical workloads. This approach eliminates the need for Extract, Transform, Load (ETL) processes and reduces data latency, enabling organizations to make decisions based on the most current information available. 
HTAP systems achieve this through sophisticated architecture that typically includes in-memory processing, columnar storage capabilities, and advanced workload management mechanisms.</p><img alt="HTAP_diagram (55K)" src="https://www.navicat.com/link/Blog/Image/2025/20250320/HTAP_diagram.png" height="436" width="830" />    <h1 class="blog-sub-title">Traditional Databases and HTAP</h1>    <p>While purpose-built HTAP databases like SAP HANA and MemSQL lead the market, traditional databases have evolved to support HTAP workloads in various capacities. MongoDB, for instance, has embraced HTAP through its aggregation pipeline and change streams features. These capabilities allow organizations to perform real-time analytics on operational data while maintaining MongoDB's core strengths in handling document-based transactions. The platform's ability to scale horizontally makes it particularly suitable for organizations dealing with large volumes of semi-structured data.</p>    <p>PostgreSQL, often praised for its extensibility, offers several paths to HTAP functionality. Through its Foreign Data Wrapper (FDW) feature, PostgreSQL can integrate with specialized analytical stores while maintaining transactional capabilities. The TimescaleDB extension transforms PostgreSQL into a powerful time-series database, enabling complex analytical queries without sacrificing transactional performance. Additionally, the Citus extension provides distributed query capabilities, allowing PostgreSQL to scale both transactional and analytical workloads across multiple nodes.</p>    <p>MySQL, particularly through its NDB Cluster technology, is well suited to HTAP. The system maintains separate nodes for transactions and analytics, with real-time replication ensuring data consistency. The InnoDB storage engine's buffer pool optimizations and support for in-memory tables further enhance analytical performance without compromising transactional integrity. 
MySQL's Group Replication feature allows organizations to dedicate specific nodes to analytical workloads, providing a flexible approach to HTAP implementation.</p>    <h1 class="blog-sub-title">Database Management Tools for HTAP</h1>  <p>For organizations implementing HTAP solutions using these traditional databases, tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> prove invaluable for database management and monitoring. Navicat's unified interface supports multiple database systems, making it easier to manage hybrid environments where different databases might be employed for various aspects of the HTAP architecture. Its visual query builder and data modeling tools help developers and database administrators optimize both transactional and analytical workloads.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>The future of HTAP looks promising as traditional database systems continue to evolve and incorporate more sophisticated HTAP capabilities. The growing demand for real-time analytics, coupled with advancements in hardware and software technologies, is driving innovation in this space. Organizations are increasingly recognizing that the ability to perform real-time analytics on operational data is not just a competitive advantage but a necessity in today's fast-paced business environment.</p>    <p>As we move forward, the distinction between transactional and analytical systems may continue to blur, with HTAP becoming the standard approach for database architecture. This evolution will likely be accompanied by further improvements in traditional databases' HTAP capabilities, making sophisticated real-time analytics more accessible to organizations of all sizes.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 17.2: Smarter Database Management with AI Support and Enhanced Cloud Capabilities</title>
<link>https://www.navicat.com/company/aboutus/blog/3271-navicat-17-2-smarter-database-management-with-ai-support-and-enhanced-cloud-capabilities.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 17.2: Smarter Database Management with AI Support and Enhanced Cloud Capabilities</title></head><body><b>Mar 10, 2025</b> by Robert Gravelle<br/><br/><p>Back in August of 2024, Navicat released version 17.1, which added Enhanced Query Explain and Expanded Database Connectivity. Now, version 17.2 is in Beta and is slated for release shortly. Some of the new features that we'll be talking about in today's blog include:</p>      <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>AI Assistant:<br/>  Get AI assistance directly within Navicat whenever you need it. It enables you to ask questions and receive instant answers.  </li>  <li>Integrated Snowflake support:<br/>  Navicat has now added support for Snowflake, thereby enhancing the management of your cloud-based data warehousing platform.  </li>  <li>Data Vault and Dimensional modeling methods:<br/>  Build scalable and adaptable data architectures for various applications using Relational, Dimensional, and Data Vault methodologies.  </li>  <li>Data profiling in BI workspace data source:<br/>  There's an integrated data profiling tool in Data Preview to provide a visual and comprehensive view of your data.  </li></ul><p>The rest of the blog will explore each of these in greater detail.</p><h1 class="blog-sub-title">AI Assistant</h1><p>An AI Assistant is an integrated tool that provides instant, contextual guidance and answers within a software application, leveraging artificial intelligence to help users solve problems, understand features, and improve their workflow through natural language interactions. Navicat's AI Assistant helps you write your SQL statements more efficiently. It does this by submitting your inquiries to the AI providers for processing, with responses sent exclusively back to the Navicat application installed on your local device. 
You can receive guidance from many of the popular AI chatbots, including ChatGPT, Google Gemini, DeepSeek, and Ollama. </p><p>Once enabled, you can access the AI Assistant from the right-hand AI pane:</p><img alt="ai_assistant (66K)" src="https://www.navicat.com/link/Blog/Image/2025/20250310/ai_assistant.jpg"/><h1 class="blog-sub-title">Integrated Snowflake Support</h1><p>Snowflake is a cloud-native data warehousing platform that allows organizations to store, process, and analyze large volumes of structured and semi-structured data. It separates storage and compute resources, enabling flexible, scalable data management across multiple cloud platforms like AWS, Azure, and Google Cloud, with built-in performance optimization and easy data sharing capabilities.</p><p>Now you can manage your Snowflake tables, views, queries and functions directly from Navicat.</p><img alt="snowflake_support (77K)" src="https://www.navicat.com/link/Blog/Image/2025/20250310/snowflake_support.jpg"/><h1 class="blog-sub-title">Data Vault and Dimensional Modeling Methods</h1><p>Data Vault and Dimensional modeling are two strategic approaches to designing data models for business intelligence. Data Vault focuses on historical tracking with flexible components like Hubs, Links, and Satellites, allowing adaptable management of complex data environments. Dimensional modeling, in contrast, structures data into fact and dimension tables, optimizing for query performance and user comprehension by providing clear, contextual insights into quantitative business data.</p><p>You'll find the new model types on the New Model dialog:</p><img alt="new_model_types (84K)" src="https://www.navicat.com/link/Blog/Image/2025/20250310/new_model_types.jpg"/><h1 class="blog-sub-title">Data Profiling in BI Workspace Data Source</h1><p>Introduced in Navicat 17, the Data Profiling tool provides a visual and comprehensive view of your data at the click of a button! 
Now the BI workspace's Data Preview includes the same Data Profiling abilities.</p><img alt="bi_data_profiling (77K)" src="https://www.navicat.com/link/Blog/Image/2025/20250310/bi_data_profiling.jpg"/><h1 class="blog-sub-title">Other Improvements</h1><p>In addition to the new features outlined above, Navicat 17.2 also includes a few notable improvements.  These include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Filter &amp; Sort pane in data viewer now supports text mode</li><li>Structure Synchronization includes MongoDB databases</li><li>Several other usability enhancements</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered just some of the exciting new features to expect in Navicat 17.2. Be sure to check the <a class="default-links" href="https://www.navicat.com/en/" target="_blank">Navicat homepage</a> for updates on the release!</p></body></html>]]></description>
</item>
<item>
<title>Database Lakehouse Architecture - The Evolution of Enterprise Data Management</title>
<link>https://www.navicat.com/company/aboutus/blog/3321-database-lakehouse-architecture-the-evolution-of-enterprise-data-management.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Lakehouse Architecture - The Evolution of Enterprise Data Management</title></head><body><b>Mar 5, 2025</b> by Robert Gravelle<br/><br/><p>The realm of data storage has evolved dramatically over the past decade, leading organizations to seek more effective ways to manage their data assets. Database Lakehouse Architecture has emerged as an innovative solution that bridges the gap between traditional data warehouses and data lakes, combining the best aspects of both approaches. This article explores how Lakehouse Architecture works and examines the crucial role that traditional databases play in supporting these modern data platforms.</p><h1 class="blog-sub-title">Lakehouse Architecture Defined</h1><p>A Lakehouse Architecture represents a new approach to data management that merges the flexibility and cost-effectiveness of data lakes with the reliability and performance of data warehouses. At its core, a Lakehouse uses cloud object storage to maintain vast amounts of raw data in open file formats like Apache Parquet, while implementing additional layers of functionality to provide warehouse-like features such as ACID transactions, schema enforcement, and optimized query performance.</p><h1 class="blog-sub-title">The Foundation: Storage and Processing</h1><p>The foundation of a Lakehouse typically consists of cloud object storage systems that house data in open formats. These systems are enhanced by table formats like Delta Lake, Apache Hudi, or Apache Iceberg, which add crucial capabilities for managing data reliability and consistency. 
This combination creates a robust base layer that can handle both structured and unstructured data while maintaining the performance characteristics needed for enterprise applications.</p><h1 class="blog-sub-title">Query Engines and Processing Layer</h1><p>Above the storage layer, powerful query engines like Apache Spark and Trino provide the computational muscle needed to process and analyze data efficiently. These engines can handle everything from basic SQL queries to complex machine learning workloads, making the Lakehouse suitable for a wide range of analytical needs. Managed solutions like Databricks SQL and Snowflake further enhance these capabilities by providing optimized, enterprise-grade query processing.</p><h1 class="blog-sub-title">The Role of Traditional Databases</h1><p>While the core Lakehouse infrastructure handles large-scale data storage and processing, traditional databases play crucial supporting roles in the overall architecture. PostgreSQL, with its ACID compliance and rich feature set, often serves as the operational database for structured data that requires frequent updates and complex transactions. Its ability to handle both relational and JSON data makes it particularly valuable in modern data architectures.</p><p>MongoDB comes into play when applications need to handle semi-structured data with flexible schemas. Its document-oriented approach complements the Lakehouse by providing a repository for application-specific data storage. This makes it particularly valuable for microservices architectures that feed data into the Lakehouse.</p><p>Redis serves as a high-performance caching layer, dramatically improving data access speeds for frequently accessed information. 
Its in-memory architecture and support for diverse data structures make it ideal for maintaining real-time views of data that originates from the Lakehouse, enabling fast application responses while maintaining consistency within the broader ecosystem.</p><h1 class="blog-sub-title">Management and Integration</h1><p>Managing the complex Lakehouse infrastructure requires sophisticated tools, and this is where database management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> prove invaluable. Navicat provides comprehensive support for the traditional databases involved in Lakehouse architectures, offering unified interfaces for managing PostgreSQL, MongoDB, Redis, and other databases that play crucial roles in the overall system. This integration capability helps organizations maintain consistency and efficiency across the entire data infrastructure.</p><h1 class="blog-sub-title">Future Outlook</h1><p>The Lakehouse Architecture continues to evolve, with new tools and capabilities emerging regularly. The integration of traditional databases with modern Lakehouse platforms represents a pragmatic approach to enterprise data management, combining the strengths of established database systems with the innovation of modern data platforms. As organizations continue to deal with growing data volumes and increasingly complex analytical requirements, Lakehouse Architecture, supported by traditional databases and modern management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, provides a solid foundation for future data management needs.</p></body></html>]]></description>
</item>
<item>
<title>Building Modern Distributed Data Systems Using a Database Mesh Architecture</title>
<link>https://www.navicat.com/company/aboutus/blog/3181-building-modern-distributed-data-systems-using-a-database-mesh-architecture.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Building Modern Distributed Data Systems Using a Database Mesh Architecture</title></head><body><b>Feb 27, 2025</b> by Robert Gravelle<br/><br/><p>In today's microservices-driven world, organizations face increasing challenges in managing data across distributed systems. Database Mesh Architecture has emerged as a powerful solution to these challenges, offering a decentralized approach to data management that aligns with modern application architectures. This article explores how Database Mesh Architecture works and how to implement it using popular databases such as PostgreSQL and MongoDB.</p><h1 class="blog-sub-title">What Exactly Is Database Mesh Architecture?</h1><p>Database Mesh Architecture represents a decentralized approach to managing data infrastructure where different databases work together as a cohesive system while remaining independently operated. Unlike traditional monolithic database systems, a database mesh distributes data management across multiple specialized databases, each serving specific business domains or use cases. This approach enables organizations to maintain flexibility while ensuring data consistency and accessibility across the entire system.</p><h1 class="blog-sub-title">Core Principles and Components</h1><p>At its heart, Database Mesh Architecture operates on the principle of domain-oriented data ownership. Each business domain maintains control over its data and database choices, enabling teams to make independent decisions about data structures and management approaches. This autonomy is balanced with standardized practices that ensure system-wide coherence.</p><p>The architecture also emphasizes self-service infrastructure, where database resources can be provisioned automatically according to predefined standards. 
This automation reduces operational overhead while maintaining consistent security and performance standards across the mesh.</p><p>An essential component is the interoperability layer, which enables seamless communication between different database systems. This layer handles standardized data access protocols, implements consistent security policies, and manages metadata across the entire mesh. Through this layer, different database systems can work together effectively while maintaining their specialized roles.</p><h1 class="blog-sub-title">Implementing a Database Mesh with Popular Databases</h1><p>A successful database mesh implementation combines various database types to serve different needs:</p> <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li><strong>PostgreSQL</strong> often serves as the foundation for transactional data, offering strong ACID compliance, sophisticated partitioning capabilities, and advanced replication features. Moreover, its many extensions make it particularly valuable in a mesh architecture, where flexibility and extensibility are crucial.</li><li>For document-oriented data, <strong>MongoDB</strong> provides excellent capabilities with its flexible schema design and horizontal scaling features. Its native support for JSON documents and built-in sharding capabilities make it ideal for handling varied and evolving data structures within the mesh.</li><li>High-performance caching requirements are typically addressed using <strong>Redis</strong>, which excels at in-memory data storage and real-time operations. Its pub/sub capabilities and cluster mode for scaling make it an excellent choice for managing fast-changing data within the mesh.</li><li>Search functionality is often implemented using <strong>Elasticsearch</strong>, which provides powerful full-text search capabilities along with analytics features. 
Its distributed architecture naturally aligns with the mesh concept, enabling efficient data processing across the system.</li></ul><h1 class="blog-sub-title">Tips For Implementation and Management</h1><p>When implementing a database mesh, organizations should start with a modest scope, focusing on a few well-defined domains before expanding. This approach allows teams to validate patterns and practices before scaling the architecture. Standardization plays a crucial role in successful implementation, particularly in areas of naming conventions, security practices, and data ownership concerns.</p><p>Continuous monitoring and optimization are essential for maintaining mesh performance. Teams should track key metrics, monitor data consistency, and regularly optimize based on observed usage patterns. This ongoing attention ensures the mesh remains efficient and effective as business needs evolve.</p><p>Unsurprisingly, the complexity of a database mesh requires sophisticated management tools. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> stands out by providing comprehensive support for most databases commonly used in mesh architectures. Through its interface, teams can perform visual database design, query optimization, data synchronization, and performance monitoring across different database systems. This unified management approach greatly simplifies the operation of complex mesh architectures.</p><h1 class="blog-sub-title">Conclusion</h1><p>Database Mesh Architecture represents a sophisticated approach to handling complex data requirements in distributed systems. 
By thoughtfully combining different database technologies and managing them with professional-grade tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, organizations can build flexible, scalable data infrastructures that meet modern business needs while maintaining manageability and performance.</p></body></html>]]></description>
</item>
<item>
<title>How Multi-Modal Databases Are Transforming Modern Data Management</title>
<link>https://www.navicat.com/company/aboutus/blog/3170-how-multi-modal-databases-are-transforming-modern-data-management.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How Multi-Modal Databases Are Transforming Modern Data Management</title></head><body><b>Feb 20, 2025</b> by Robert Gravelle<br/><br/><p>The landscape of data storage and management is currently undergoing a dramatic transformation. As organizations deal with increasingly diverse types of data, traditional relational databases are no longer sufficient for many modern applications. Enter multi-modal databases, a powerful solution that's reshaping how we think about data storage and manipulation. This article explores how multi-modal databases are revolutionizing data management by enabling organizations to store and process multiple types of data - from traditional tables to documents, graphs, and vectors - all within a single, unified system. </p><h1 class="blog-sub-title">What Are Multi-Modal Databases?</h1><p>A multi-modal database is a database management system designed to handle multiple types of data models within a single, integrated backend. Unlike traditional relational databases that primarily work with structured data in tables, multi-modal databases can simultaneously manage different data types and structures - from documents and graphs to vectors and spatial data.</p><p>For instance, consider an e-commerce platform. It might need to store product information in a traditional tabular format, customer reviews as documents, recommendation systems as vectors, and relationship networks as graphs. A multi-modal database can handle all these requirements within a single system, eliminating the need for multiple specialized databases.</p><h1 class="blog-sub-title">The Evolution from Traditional Databases</h1><p>Traditional relational databases were designed to work with structured data. As such, they excel at handling relationships between well-defined data entities through tables and SQL queries. 
However, traditional databases face limitations when dealing with unstructured data like documents or images, complex relationships better represented as graphs, vector embeddings for AI/ML applications, and semi-structured data with varying attributes.</p><p>Multi-modal databases address these limitations by incorporating different data models into a unified system. Modern database platforms like MongoDB and PostgreSQL have evolved to handle multiple data models effectively.</p><h1 class="blog-sub-title">Key Features and Benefits</h1><p>Multi-modal databases offer several advantages over traditional systems:</p><p><strong>Flexibility:</strong> They can adapt to varying data requirements without needing multiple specialized databases. PostgreSQL, for example, supports traditional relational data alongside JSON documents and, more recently, vector storage for AI applications.</p><p><strong>Simplified Architecture:</strong> Organizations can reduce complexity by using a single database system instead of maintaining multiple specialized databases. 
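</p><p>As a small sketch of how such consolidation can look in practice (the table and column names here are hypothetical, and the vector column assumes the pgvector extension has been installed), a single PostgreSQL table can serve relational, document, and vector needs at once:</p><pre>-- Hypothetical example: one PostgreSQL table spanning three data models
-- (the vector type assumes the pgvector extension: CREATE EXTENSION vector;)
CREATE TABLE products (
    product_id  SERIAL PRIMARY KEY,   -- relational data
    name        VARCHAR(255) NOT NULL,
    attributes  JSONB,                -- document-style, semi-structured data
    embedding   vector(384)           -- vector data for recommendations
);

-- A relational filter combined with a JSONB containment test
SELECT product_id, name
FROM products
WHERE attributes @&gt; '{"color": "blue"}';</pre><p>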
This consolidation, supported by tools like Navicat, makes database management more straightforward and efficient.</p><p><strong>Improved Performance:</strong> By handling different data models natively, multi-modal databases can optimize performance for each type of data while maintaining data consistency across models.</p><p><strong>Cost Efficiency:</strong> Using a single database system instead of multiple specialized ones can significantly reduce operational costs and complexity.</p><h1 class="blog-sub-title">Real-World Applications</h1><p>The versatility of multi-modal databases makes them ideal for modern applications such as:</p><p><strong>Social Media Platforms:</strong> Storing user profiles as documents, friendship networks as graphs, and media content metadata in traditional tables.</p><p><strong>Healthcare Systems:</strong> Managing patient records as documents, medical imagery metadata in tables, and treatment relationship networks as graphs.</p><p><strong>AI-Powered Applications:</strong> Storing traditional data alongside vector embeddings for machine learning models, particularly in recommendation systems and natural language processing applications.</p><h1 class="blog-sub-title">The Role of Modern Database Tools</h1><p>Database management tools have evolved alongside these multi-modal systems. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, for instance, provides comprehensive support for both <a class="default-links" href="https://www.navicat.com/products/navicat-for-mongodb" target="_blank">MongoDB</a> and <a class="default-links" href="https://www.navicat.com/products/navicat-for-postgresql" target="_blank">PostgreSQL</a>, offering intuitive interfaces for managing different data models within these platforms. 
This support includes visual query builders, data modeling tools, and automation capabilities that work across different data models.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this exploration of multi-modal databases, we've seen how they fundamentally differ from traditional relational databases by supporting diverse data types within a single system, from documents and graphs to vectors and spatial data. We've examined their key benefits, including increased flexibility, simplified architecture, improved performance, and cost efficiency, while exploring real-world applications across social media, healthcare, and AI-powered systems.</p><p>As organizations continue to deal with increasingly diverse data types, multi-modal databases represent a significant evolution in data management. Their ability to handle various data models efficiently, combined with support from versatile management tools like Navicat, makes them an invaluable solution for modern data challenges. Whether you're working with traditional relational data, documents, graphs, or vectors, multi-modal databases provide a unified, efficient approach to data management.</p></body></html>]]></description>
</item>
<item>
<title>PostgreSQL's Rise and the New Era of Serverless Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/3162-postgresql-s-rise-and-the-new-era-of-serverless-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>PostgreSQL's Rise and the New Era of Serverless Databases</title></head><body><b>Feb 13, 2025</b> by Robert Gravelle<br/><br/><img alt="PostgreSQL's Rise header (45K)" src="https://www.navicat.com/link/Blog/Image/2025/20250213/PostgreSQL's%20Rise%20header.jpg" height="372" width="732" /><p>According to the <a class="default-links" href="https://survey.stackoverflow.co/2023/" target="_blank">2023 Stack Overflow Developer Survey</a>, PostgreSQL has achieved a significant milestone by overtaking MySQL as the most admired and desired database system among developers. This shift reflects a growing appreciation for PostgreSQL's robust feature set, reliability, and extensibility in the developer community.</p><p>This changing landscape has sparked innovation in the database-as-a-service space, particularly evident in the competition between two cutting-edge platforms: PlanetScale, built on MySQL, and Neon, powered by PostgreSQL. Both services are reimagining how developers interact with databases in the cloud era. These developments should be of interest to <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> users, as both are fully supported by Navicat's comprehensive database development and management tools.</p><p>This blog will provide a comparison of the two services and offer some tips for choosing between them.</p>    <h1 class="blog-sub-title">PlanetScale: MySQL's Modern Evolution</h1><p>PlanetScale brings MySQL into the serverless age, leveraging Vitess, the same technology that powers YouTube's database infrastructure. Its standout features include database branching (similar to Git workflows), non-blocking schema changes, and automated scaling capabilities. 
Developers particularly appreciate PlanetScale's deployment workflow, which allows them to create development branches, make schema changes, and deploy with confidence through automated review processes.</p><p>The platform excels in:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>Developer-friendly database branching</li>  <li>Seamless schema management</li>  <li>Proven scalability</li>  <li>Built-in connection pooling</li>  <li>Zero-downtime schema changes</li></ul><h1 class="blog-sub-title">Neon: PostgreSQL's Serverless Innovation</h1><p>Neon takes PostgreSQL's rising popularity and combines it with modern cloud architecture. It separates storage from compute, enabling true serverless scaling and instant database branching. Neon maintains full compatibility with PostgreSQL while adding cloud-native features that developers expect in modern platforms.</p><p>Key advantages include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>Full PostgreSQL feature compatibility</li>  <li>Efficient storage architecture</li>  <li>Serverless autoscaling</li>  <li>Instant branching capabilities</li>  <li>Cost-effective resource utilization</li></ul><h1 class="blog-sub-title">Choosing Between the Platforms</h1><p>The choice between PlanetScale and Neon often aligns with specific project needs and team expertise. PlanetScale is particularly attractive for teams with MySQL experience who need proven scalability and appreciate Git-like workflows. Its schema management tools and deployment safety features make it especially suitable for teams working on rapidly evolving applications.</p><p>Meanwhile, Neon appeals to developers who prefer PostgreSQL's advanced features and want to leverage them in a serverless environment. 
Its storage-compute separation and efficient resource utilization make it particularly cost-effective for applications with variable workloads.</p><h1 class="blog-sub-title">Conclusion</h1><p>PlanetScale and Neon represent the future of database management, offering developers powerful tools to build and scale applications without the operational overhead of traditional database management. Their emergence highlights how the database landscape is evolving to meet modern development needs, with both MySQL and PostgreSQL finding new ways to serve developers through innovative platforms.</p><p>The competition between these platforms showcases how the database market continues to evolve, with each solution bringing unique strengths to the table. As PostgreSQL's popularity continues to grow, as evidenced by the Stack Overflow survey, we can expect to see continued innovation in both services. Moreover, the availability of professional grade database management tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, which supports both PlanetScale and Neon, ensures that developers can maintain their preferred workflow regardless of their platform choice. Navicat's comprehensive toolset, combined with the innovative features of both platforms, provides developers with all of the necessary tools for building and managing modern applications.</p></body></html>]]></description>
</item>
<item>
<title>Extending PostgreSQL Data Types with Navicat 17 - Part 4</title>
<link>https://www.navicat.com/company/aboutus/blog/3145-extending-postgresql-data-types-with-navicat-17-part-4.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Extending PostgreSQL Data Types with Navicat 17 - Part 4</title></head><body><b>Jan 27, 2025</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Range Types</h1><p>It's no secret that PostgreSQL is one of the most flexible databases on the market. In fact, PostgreSQL's extensibility and rich feature set recently propelled PostgreSQL ahead of MySQL as the most admired and desired database system among developers. In this series on creating custom data types in PostgreSQL using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> we've explored a few options so far, including custom Domains, as well as Composite and Enumerated types. The topic of this week's blog will be Range types, which are particularly useful when you need to work with continuous intervals or ranges of values. </p><h1 class="blog-sub-title">A Quick Description of the RANGE TYPE</h1><p>Range Types in PostgreSQL provide a means for working with continuous intervals of values. Hence, a range could include all product prices between $10 and $20. These ranges let you work with any values that fall within their bounds, making it easy to check for things like scheduling conflicts or price matching. Ranges are particularly useful in databases when you need to work with continuous spans of time, numerical intervals, or any other sequential data.</p><p>For example, in a movie theater's database, you might use ranges to represent screening times, ensuring no two movies are scheduled to overlap in the same theater. Or in a hotel booking system, ranges could track room availability dates, making it easy to check for vacancy conflicts. 
Range types are especially valuable because PostgreSQL handles all the complex logic of comparing and manipulating these intervals, providing built-in operations to check for overlaps, containment, and intersections between ranges.</p><p>In the next section, we'll go over a couple of practical examples using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> and the free <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">DVD Rental database</a>.</p><h1 class="blog-sub-title">Defining Film Runtime Ranges </h1><p>Before considering a custom range type, we should check whether one of PostgreSQL's built-in range types would accomplish what we're looking for. These include:</p><ul>  <li>int4range: Range of integer</li>  <li>int8range: Range of bigint</li>  <li>numrange: Range of numeric</li>  <li>tsrange: Range of timestamp without time zone</li>  <li>tstzrange: Range of timestamp with time zone</li>  <li>daterange: Range of date</li></ul>   <p>Although film runtimes in the DVD Rental database are stored as integers, creating our own range type makes sense when we have specific business requirements that aren't covered by built-in types. 
For instance, if we were tracking film runtime ranges with special validation rules:</p><pre>-- Creating a custom minutes range type with specific validation
CREATE TYPE runtime_range AS RANGE (
    subtype = integer,
    subtype_diff = int4mi
);

CREATE TABLE film_runtime_categories (
   category_name VARCHAR(50),
   typical_runtime runtime_range,
   CHECK (lower(typical_runtime) &gt;= 30 AND upper(typical_runtime) &lt;= 240)
);

-- Adding rows to the table
INSERT INTO film_runtime_categories VALUES ('Short Film', '[30,45]');
INSERT INTO film_runtime_categories VALUES ('Feature Film', '[75,180]');</pre><h3>Creating a Range Type in Navicat 17</h3><p>An easier way to define a custom Type is to use Navicat's GUI-based tools. You'll find them in both <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> and <a class="default-links" href="https://www.navicat.com/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 17</a>.  To access the Type tool, simply click "Others" in the main toolbar and then select "Type" from the drop-down: </p><img alt="type_menu_command (33K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/type_menu_command.jpg"/><p>That will bring up the Objects pane, where we'll see a list of existing types. To create a new one, click on the arrow next to the "New Type" item in the "Objects" toolbar and select the "Range" item from the context menu:</p><img alt="range_menu_item (31K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/range_menu_item.jpg"/><p>The Range Type designer has three tabs: General, Comment, and SQL Preview. On the General tab, the main details that we need to supply are the "Subtype" and "Subtype Diff". 
We'll base our type on the int4 as follows:</p><img alt="range_type_general_tab (35K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/range_type_general_tab.jpg"/><p>Before clicking the "Save" button we can take a look at the statement that Navicat will generate by clicking on the "SQL Preview" tab:</p><img alt="range_type_sql_preview (22K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/range_type_sql_preview.jpg"/><p>Notice that the Type name is "Untitled" since we haven't yet saved the definition. That is expected.</p><p>Upon clicking on the "Save" button, we are presented with a "Save As" dialog where we can give our Type a name. Let's call it "runtime_range":</p><img alt="range_type_save_as_dialog (38K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/range_type_save_as_dialog.jpg"/><p>We can now use our "runtime_range" type just like any other PostgreSQL data type. For instance, if we create the "film_runtime_categories" table that we saw in the example above, we can set the "typical_runtime" column to our custom type by selecting it from the "Object Type" drop-down(s):</p><img alt="runtime_range_type_in_table_designer (74K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/runtime_range_type_in_table_designer.jpg"/><p>We can then add our field validation on the Checks tab:</p><img alt="typical_runtime_check (42K)" src="https://www.navicat.com/link/Blog/Image/2025/20250127/typical_runtime_check.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p> In today's blog, we created a Range Type using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>'s Type tool and created a new table that featured our custom type. In Part 5 we will conclude the series by extending the Base Type. </p></body></html>]]></description>
</item>
<item>
<title>Extending PostgreSQL Data Types with Navicat 17 - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/3143-extending-postgresql-data-types-with-navicat-17-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Extending PostgreSQL Data Types with Navicat 17 - Part 3</title></head><body><b>Jan 17, 2025</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Enumerated Types</h1><p>In this series on creating custom data types in PostgreSQL using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> we've explored a couple of options so far. In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/3138-extending-postgresql-data-types-with-navicat-17-domains.html" target="_blank">part 1</a>, we learned how to create a custom Domain for the free <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">DVD Rental database</a>. Last week, we created a Composite Type to return complex data from a user-defined function. Today's blog will cover Enumerated Types, which limit values to a set of predefined options. </p><h1 class="blog-sub-title">A Quick Overview of the ENUM TYPE</h1><p>Enumerated types (ENUMs) allow us to define a data type with a static, ordered set of values. This is useful for situations where a column must contain one of a limited set of predefined values.</p><p>Like other PostgreSQL types, the ENUM type is created using the CREATE TYPE statement.  Here's an ENUM that defines four user statuses:</p><pre>CREATE TYPE user_status AS ENUM ('active', 'inactive', 'suspended', 'pending');</pre><p>Here's another that defines movie ratings:</p><pre>CREATE TYPE movie_rating AS ENUM ('G', 'PG', 'PG-13', 'R', 'NC-17');</pre><p>Once defined, we can use our custom type in a table as follows:</p><pre>CREATE TABLE films (
  film_id SERIAL PRIMARY KEY,
  title VARCHAR(255),
  rating movie_rating
);</pre><h1 class="blog-sub-title">Creating an Enumerated Type in Navicat 17</h1><p>An easier way to define an Enumerated Type is to use Navicat's GUI-based tools. 
You'll find them in both <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> and <a class="default-links" href="https://www.navicat.com/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 17</a>.  To access the Type tool, simply click "Others" in the main toolbar and then select "Type" from the drop-down: </p><img alt="type_menu_command (33K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/type_menu_command.jpg"  /><p>That will bring up the Objects pane, where we'll see a list of existing types. To create a new one, click on the arrow next to the "New Type" item in the "Objects" toolbar and select the "Enum" item from the context menu:</p><img alt="enum_menu_item (38K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/enum_menu_item.jpg" /><p>That will launch the Type designer in a new tab. On the General tab there will be an empty cell in which we can enter the first Label for our Enum, i.e., "G": </p><img alt="enum_type_label (21K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/enum_type_label.jpg"/><p>We can add a new row to enter the next Label by clicking on "Add Label".  Once all the Labels have been entered, the General tab should look like this:</p><img alt="completed_type_labels (25K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/completed_type_labels.jpg" /><p>Before clicking the "Save" button we can take a look at the statement that Navicat will generate by clicking on the "SQL Preview" tab:</p><img alt="enum_type_sql_preview (19K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/enum_type_sql_preview.jpg"  /><p>Notice that the Type name is "Untitled" since we haven't yet saved the definition. That is expected.</p><p>Upon clicking on the "Save" button, we are presented with a "Save As" dialog where we can give our Type a name. 
Let's call it "film_rating":</p><img alt="enum_type_save_as_dialog (34K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/enum_type_save_as_dialog.jpg"  /><h1 class="blog-sub-title">Using the film_rating Type In a Table Definition</h1><p>Now we can use the "film_rating" type just like any other PostgreSQL data type. For instance, we can set a table column to our custom type. We can even change the type on an existing table provided that its data values conform to our Enum values. In fact, changing a column's type from a generic VARCHAR to the stricter ENUM is an efficient way to quickly determine if a column contains invalid values.</p><p>If we open the "film" table in the Navicat Table Designer, we can set the "rating" column to our "film_rating" type by selecting "(Type)" from the "Type" drop-down and then setting the "Object Type" to "film_rating":</p><img alt="film_table_with_enum_type (119K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/film_table_with_enum_type.jpg"/><p>Also make sure that the "Collation" field is blank.</p><p>If the column doesn't contain any invalid values, we should be able to Save the table definition without any errors or warnings.</p><p>One of the advantages of setting a column type to an ENUM is that Navicat will provide a drop-down for choosing a value:</p><img alt="adding_a_new_row_to_the_film_table (48K)" src="https://www.navicat.com/link/Blog/Image/2025/20250117/adding_a_new_row_to_the_film_table.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we created an Enumerated Type using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>'s Type tool and updated an existing table to utilize our custom type in order to constrain column values. Part 4 will proceed with the Range Type.</p></body></html>]]></description>
</item>
<item>
<title>Extending PostgreSQL Data Types with Navicat 17 - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/3141-extending-postgresql-data-types-with-navicat-17-composite-types.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Extending PostgreSQL Data Types with Navicat 17: Composite Types</title></head><body><b>Jan 3, 2025</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Composite Types</h1><p>Welcome to the second installment of this series on creating custom data types in PostgreSQL using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>. In <a class="default-links" href="http://navicat.com/en/company/aboutus/blog/3138-extending-postgresql-data-types-with-navicat-17-domains.html" target="_blank">part 1</a>, we learned how to create a custom Domain for the free <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">DVD Rental database</a>. A Domain is a user-defined data type with constraints such as NOT NULL and CHECK. In today's blog, we'll create a Composite Type to return complex data from a user-defined function. </p><h1 class="blog-sub-title">PostgreSQL Types Defined</h1><p>Types are generated using the CREATE TYPE command. It creates a Composite Type that may be used in stored procedures and functions as the data types of input parameters as well as returned values.</p><p>PostgreSQL's CREATE TYPE supports four primary variations:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Composite Types: Define composite data which combines two or more data types. 
allowing creation of complex, multi-field data types that can represent intricate data structures.</li><li>Enumeration Types: Defined as a fixed set of predefined, named values, restricting input to only those specific options.</li><li>Range Types: Representing continuous intervals between values, enabling sophisticated operations on contiguous data ranges like dates or numbers.</li><li>Base Types: Entirely new scalar types, defined by supplying low-level input and output functions. In practice, most user-defined types are extensions of, or constraints applied to, existing PostgreSQL base types like int, varchar, or numeric.</li></ul><p>In the next few sections we'll explore Composite Types in more detail by creating a Type and using it in a function.</p><h1 class="blog-sub-title">The CREATE TYPE Statement</h1><p>All Types are created using the CREATE TYPE statement. Let's say that we wanted a function that returns several values about a film, such as the film ID, title, and release_year. Here is the statement that creates a type named "film_summary":</p><pre>CREATE TYPE film_summary AS (
  film_id INT4,
  title VARCHAR(255),
  release_year CHAR(4)
);</pre><h1 class="blog-sub-title">Creating a Type in Navicat 17</h1><p><a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> and <a class="default-links" href="https://www.navicat.com/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 17</a> both offer GUI-based tools for generating types without having to know all of the exact syntax. You'll find it under "Others" in the main toolbar: </p><img alt="type_menu_command (33K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/type_menu_command.jpg" height="373" width="358" /><p>Next, we'll click on the arrow next to the "New Type" item in the "Objects" toolbar. That brings up the four different options for creating a type. 
Select the "Composite" item from the context menu:</p><img alt="composite_menu_item (16K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/composite_menu_item.jpg" height="163" width="307" /><p>That will bring up a grid in which we can enter the field details. Since the three fields which make up the "film_summary" Type already exist, we can bring up the "film" table in the Table Designer and copy the Type and Length data from there. Here are the three fields highlighted in <span style="color:red;">red</span>:</p><img alt="fields_in_table_designer (85K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/fields_in_table_designer.jpg" height="357" width="655" /><p>The grid will already have an empty row for the first field. Once we've entered its details, we can add a new row by clicking on "Add Member".  Here is the completed grid:</p><img alt="composite_type_fields (30K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/composite_type_fields.jpg" height="172" width="480" /><p>Before clicking the "Save" button we can take a look at the statement that Navicat will generate by clicking on the "SQL Preview" tab:</p><img alt="sql_preview_tab (22K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/sql_preview_tab.jpg" height="162" width="369" /><p>Notice that the Type name is "Untitled" since we haven't yet saved the definition. That is expected.</p><p>Let's assign the name now. Clicking on the "Save" button brings up the "Save As" dialog where we can give our Type a name of "film_summary":</p><img alt="save_as_dialog (31K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/save_as_dialog.jpg" height="323" width="488" /><h1 class="blog-sub-title">Using the film_summary Type In a Function</h1><p>Now it's time to use the "film_summary" as the return type of a function. Like the Type creation, we'll use Navicat's GUI tool to do so. 
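</p><p>First, though, we can give the newly saved type a quick sanity check by casting an anonymous row to it in a query window (the values shown here are illustrative):</p><pre>SELECT ROW(1, 'Academy Dinosaur', '2006')::film_summary;</pre><p>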
To access the Function Designer, click the "Function" button on the main toolbar followed by "New Function" on the "Objects" toolbar:</p><img alt="function_buttons (23K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/function_buttons.jpg" height="185" width="296" /><p>The editor will pre-populate most of the syntax for the CREATE FUNCTION for us; we just need to supply a few details like the function name, input parameters, return type, and function body.  Here is the completed CREATE FUNCTION statement:</p><pre>CREATE FUNCTION get_film_summary (f_id INT4)
  RETURNS film_summary
AS $BODY$
  SELECT
    film_id,
    title,
    release_year
  FROM
    film
  WHERE
    film_id = f_id;
$BODY$
  LANGUAGE SQL VOLATILE;</pre><img alt="get_film_summary_function_definition (40K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/get_film_summary_function_definition.jpg" height="319" width="422" /><p>Also be sure to set the language to "SQL".</p><p>Once we click the "Save" button, our function is ready to be used. The quickest and easiest way to try a function is to click the "Execute" button. That will bring up a prompt for us to supply a value for the "f_id" parameter:</p><img alt="input_parameter_prompt (35K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/input_parameter_prompt.jpg" height="290" width="514" /><p>The results should then appear in a new Result tab:</p><img alt="function_results (28K)" src="https://www.navicat.com/link/Blog/Image/2025/20250103/function_results.jpg" height="137" width="421" /><h1 class="blog-sub-title">Conclusion</h1><p> In today's blog, we created a Composite Type using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>'s Type tool and designed a function that returns our Type. Part 3 will continue with Enumeration Types. </p></body></html>]]></description>
</item>
<item>
<title>Extending PostgreSQL Data Types with Navicat 17 - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/3138-extending-postgresql-data-types-with-navicat-17-domains.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Extending PostgreSQL Data Types with Navicat 17: Domains</title></head><body><b>Dec 27, 2024</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Domains</h1><p>Storing data in proper formats ensures data integrity, prevents errors, optimizes performance, and maintains consistency across systems by enforcing validation rules and enabling efficient data management. For these reasons, top-tier relational databases like PostgreSQL offer a variety of data types. In addition, PostgreSQL enables custom data type creation via the "CREATE DOMAIN" and "CREATE TYPE" statements, allowing developers to extend data types for enhanced application-specific data validation, integrity, and consistency. In today's blog, we'll learn how to create a custom Domain for the free <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">DVD Rental database</a> using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>. Part 2 will cover Types.</p><h1 class="blog-sub-title">A Quick Comparison of CREATE DOMAIN and CREATE TYPE</h1><p>While both the CREATE DOMAIN and CREATE TYPE statements may be employed to create user-defined data types, there are some key differences to be aware of:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>CREATE DOMAIN creates a user-defined data type with constraints such as NOT NULL, CHECK, etc.</li>    <li>CREATE TYPE creates a composite type used in stored procedures as the data types of returned values.</li></ul>    <h1 class="blog-sub-title">Creating An Email Domain</h1><p>Domains centralize constraint management by allowing you to define reusable validation rules across multiple tables, such as creating a standard constraint that prevents NULL values and disallows untrimmed whitespace for specific field types. 
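</p><p>As a sketch of that pattern, a domain can require non-NULL, pre-trimmed values (the name "trimmed_text" is illustrative; note that a domain CHECK can reject values but cannot modify them):</p><pre>CREATE DOMAIN trimmed_text AS VARCHAR(255)
  NOT NULL
  CHECK (VALUE = btrim(VALUE));</pre><p>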
Here's an example that creates a domain for email addresses with a validation check:</p><pre>CREATE DOMAIN email AS VARCHAR(255)
CHECK (
  VALUE ~ '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'
);

-- Use in a table definition
CREATE TABLE customer_contacts (
  customer_id INT,
  contact_email email
);</pre><p><a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a> and <a class="default-links" href="https://www.navicat.com/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 17</a> both offer GUI-based tools for generating domains and types without having to know all of the exact syntax. You'll find both under "Others" in the main toolbar. (Both menu items are highlighted in <span style="color:red;">red</span> below):</p><img alt="others_context_menu (45K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/others_context_menu.jpg" /><p>The Domain tool includes four tabs: General, Checks, Comment, and SQL Preview.</p><h3>General Attributes</h3><p>All domains are based on an underlying type. In this case, it's VARCHAR. Once we select an Underlying Type Category of "Base Type", we can select "pg_catalog" and "varchar" from the two Underlying Type drop-downs. We'll also need to make sure that our VARCHAR has a Length of 255. Here is the General Tab with all of that information provided:</p><img alt="email_domain_general_tab (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/email_domain_general_tab.jpg"/><h3>Checks</h3><p>On the next tab, we can define one or more checks to perform when someone attempts to assign a value to our type. 
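</p><p>Once the domain has been saved, its checks can be exercised directly by casting test values to it in a query window (a hypothetical test; the exact error text may vary by server version):</p><pre>SELECT 'user@example.com'::email;  -- succeeds
SELECT 'not-an-email'::email;      -- raises a domain check constraint violation</pre><p>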
Our check will test the value against a RegEx (regular expression): </p><img alt="email_domain_checks_tab (21K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/email_domain_checks_tab.jpg"  /><h3>SQL Preview</h3><p>At this point we can either proceed to Save the Domain, which will execute the generated CREATE DOMAIN statement, or we can click on the SQL Preview tab to view the statement before saving:</p><img alt="email_domain_preview_and_save_as_dialog (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/email_domain_preview_and_save_as_dialog.jpg"/><p>Notice that the Domain name is "Untitled" since we haven't yet saved the definition. That is normal.</p><h1 class="blog-sub-title">Using the email Domain In a Table</h1><p>The best way to confirm that our "email" Domain was created is to try it in a table. The "staff" table in the "dvdrental" database includes an email field. Currently, it's storing values as a VARCHAR without any validation checks. We can change the type to our Domain by selecting the "(Domain)" option from the Type drop-down in the Table Designer and then choosing "public" and "email" for the Object Type:</p><img alt="setting_column_to_email_domain (92K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/setting_column_to_email_domain.jpg" /><p>Once we save the table, attempting to change (or add) a value which is not a valid email address will result in a constraint violation:</p><img alt="failed_check (63K)" src="https://www.navicat.com/link/Blog/Image/2024/20241227/failed_check.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>By creating a custom Domain for the free dvdrental database, we saw how domains help centralize constraint management by allowing us to define reusable validation rules. In part 2, we'll create our own type using <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 17</a>'s Type tool.</p></body></html>]]></description>
</item>
<item>
<title>Populate a MySQL 8 Table From a DAT File </title>
<link>https://www.navicat.com/company/aboutus/blog/3124-populate-a-mysql-8-table-from-a-dat-file.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Populate a MySQL 8 Table From a DAT File </title></head><body><b>Dec 20, 2024</b> by Robert Gravelle<br/><br/><p>Migrating data between heterogeneous repositories - that is to say, where the source and target databases are of different database management systems from different providers - presents several challenges. In some cases, it is possible to connect to both databases simultaneously. However, there are times when it is simply not possible. When presented with such a dilemma, database practitioners have no choice but to populate tables from a dump file. Navicat can be of great help in that process. The Import Wizard allows you to import data to tables/collections from a variety of sources, including CSV, TXT, XML, DBF and more. Moreover, you can save your settings as a profile for future use or for setting up automation tasks. In today's blog, we'll use the Navicat Import Wizard to migrate data from the <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">PostgreSQL "dvdrental" database</a> to a MySQL 8 instance using the FREE <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Navicat Premium Lite 17</a>.</p><p>For this tutorial, we will populate the film table in MySQL 8 using the PostgreSQL DAT file. Here is the table definition in the Table Designer:</p><img alt="film_table_definition (96K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/film_table_definition.jpg"/><p>To launch the Import Wizard, right-click the target table in the Navicat Navigation Pane (or Ctrl-Click in macOS) and select "Import Wizard..." from the context menu:</p><img alt="import_wizard_command (78K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_command.jpg"/><p>The first screen of the wizard is where we select the source file. 
Note that the Lite edition only supports text-based files, such as TXT, CSV, XML and JSON. Although we have a .dat file, we can select the Text file option, which encompasses .txt, .csv, and .dat formats: </p><img alt="import_wizard_data_format (48K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_data_format.jpg"/><p>On the next screen we'll choose the DAT file. There is one file for each table. The one for the film table is named "3061.dat":</p><img alt="import_wizard_open_file_dialog (152K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_open_file_dialog.jpg"/><p>Next it's time to set the delimiters. Records are delimited using the Line Feed (LF) character, while columns are separated using the TAB character. There are no quotes around text values, so be sure to remove the double quote (") character from the "Text Qualifier" text box:</p><img alt="import_wizard_delimiter (45K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_delimiter.jpg"/><p>On the next screen, you'll find a few additional options. Here, we have to uncheck the "Field Name Row" box because the DAT file does not include the field names. We'll also need to change the Date Order to Year/Month/Day ("YMD") and replace the forward slash (/) delimiter with the dash (-), as the dates we will be importing are in YYYY-MM-DD hh:mm:ss.ms format, e.g., 2013-05-26 14:50:58.951: </p><img alt="import_wizard_additional_options (58K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_additional_options.jpg" /><p>We have the option of choosing an existing table or creating a new one. Since we selected the target table when launching the Import Wizard, it should be displayed here:</p><img alt="import_wizard_target_table (40K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_target_table.jpg" /><p>The next step is to map the source fields to those in the destination table. 
Here we mustn't just assume that they will line up. A quick look at an entry in the DAT file reveals that the last_update and special_features columns are reversed:</p><p style="font-family: courier new;">5 African Egg A Fast-Paced Documentary of a Pastry Chef And a Dentist who must Pursue a Forensic Psychologist in The Gulf of Mexico 2006 1 6 2.99 130 22.99 G 2013-05-26 14:50:58.951 {"Deleted Scenes"} 'african':1 'chef':11 'dentist':14 'documentari':7 'egg':2 'fast':5 'fast-pac':4 'forens':19 'gulf':23 'mexico':25 'must':16 'pace':6 'pastri':10 'psychologist':20 'pursu':17</p><p>We can right-click (or Ctrl-Click in macOS) anywhere in the dialog and select "Direct Match All" from the context menu to quickly map the fields to those of the target table. However, once that is done, we must manually choose the last_update and special_features columns from the Target field drop-downs to change their order:</p><img alt="import_wizard_field_mappings (75K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_field_mappings.jpg"/><p>Note that field 13 (f13) can be safely ignored.</p><p>For the Import Mode, we can either Append or Copy the records, since the table should be empty:</p><img alt="import_wizard_import_mode (62K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_import_mode.jpg" /><p>When migrating from one database type to another, there is a strong chance of encountering data conversion errors. For that reason, it's a good practice to deselect the Advanced "Use extended insert statements" box. 
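</p><p>With that box deselected, each record travels in its own statement, along the lines of (values abbreviated for illustration):</p><pre>INSERT INTO `film` VALUES (1, 'African Egg', 'A Fast-Paced...');
INSERT INTO `film` VALUES (2, 'Rumble Royale', 'A historical drama...');</pre><p>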
Doing so causes Navicat to issue separate INSERT statements for each record rather than combine multiple rows using syntax such as:</p><pre>INSERT INTO `film` VALUES
  (1, 'African Egg', 'A Fast-Paced...'),
  (2, 'Rumble Royale', 'A historical drama...'),
  (3, 'Catherine the Great', 'A new take on...'),
  etc...</pre><img alt="import_wizard_advanced_options (74K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_advanced_options.jpg" height="536" width="687" /><p>Now, it's time to hit the Start button to kick off the import process. </p><p>As expected, there were a few errors (3, to be exact), but 1000 of 1003 rows were added to the target table!</p><img alt="import_wizard_results (111K)" src="https://www.navicat.com/link/Blog/Image/2024/20241220/import_wizard_results.jpg" height="512" width="682" /><h1 class="blog-sub-title">Conclusion</h1><p>Navicat's Import Wizard can dramatically cut down the amount of time spent on migrating data between heterogeneous repositories. It supports a wide range of inputs, including CSV, TXT, XML, DBF, ODBC Data Sources and more.</p><p>Interested in giving Navicat Premium Lite 17 a try? You can download it for free <a class="default-links" href="https://www.navicat.com/download/navicat-premium-lite" target="_blank">here</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Sharing Database Objects With Team Members In Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/3111-sharing-database-objects-with-team-members-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Sharing Database Objects With Team Members In Navicat 17</title></head><body><b>Dec 13, 2024</b> by Robert Gravelle<br/><br/><p>Navicat's database administration and development tools have long been designed with collaboration in mind. Now, thanks to the recent launch of <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank" >Navicat On-Prem Server</a>, collaboration takes center stage, allowing us to share connection settings, queries, aggregation pipelines, snippets, model workspaces, Business Intelligence (BI) workspaces and virtual group information with team members across the globe - and all in real-time. While the last several blogs described how to share database objects using Navicat On-Prem Server, today's entry will focus on how to accomplish the same thing using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium"target="_blank">Navicat Premium 17</a>.</p><h1 class="blog-sub-title">Viewing Navicat On-Prem Server Objects In Navicat Premium 17</h1><p>When you are logged into Navicat On-Prem Server, you'll be able to see the On-Prem Server in the Navigation Pane of Navicat database administration and development tools. Shared objects behave exactly like local ones and can be viewed, edited, and deleted directly in Navicat.</p><img alt="on-prem_server_in_navigation_pane (110K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/on-prem_server_in_navigation_pane.jpg"/><p>Another way to see On-Prem Server Objects is by clicking on your user icon in the upper-right corner of the main Navicat window. Doing so brings up the Manage Cloud dialog. It shows the name of the On-Prem Server, the Host IP, usage details, the number of projects created, and more! 
</p><img alt="manage_cloud_dialog (95K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/manage_cloud_dialog.jpg"/><p>If we wish, we can access the On-Prem Server by clicking on the "Manage Account" link. It will open the On-Prem Server in a new browser tab.</p><h1 class="blog-sub-title">Creating a New Object</h1><p>Sharing is not limited to pre-existing objects. We can also create a new object directly within the project it should belong to.  Let's start by creating a new code snippet.</p><p>First, we'll open a query under the "DVDRental MySQL DB" project. From there, we'll create a new code snippet just as we always would. Here is a code snippet named "customer - payment table join" in Navicat Premium: </p><img alt="code_snippet (51K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/code_snippet.jpg"/><p>After we save our new code snippet, it's instantaneously shared with all project members! We can verify this by opening the On-Prem Server in a browser and clicking on "Snippets":</p><img alt="shared_code_snippet (22K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/shared_code_snippet.jpg"/><h1 class="blog-sub-title">Sharing Existing Objects</h1><p>Sharing an existing object is just as easy. For example, we can share any Navigation Pane object simply by dragging it from a database instance to the same section under the On-Prem Server, e.g., 
from Queries to Queries.</p><p>For other objects that don't appear in the Navigation Pane, such as Business Intelligence (BI) or Model workspaces, we can copy from the Objects pane of the local instance:</p><img alt="copy_command (50K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/copy_command.jpg" /><p>Next, we paste the copied workspace into the Objects pane after selecting a project under the On-Prem Server in the Navigation Pane:</p><img alt="paste_command (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/paste_command.jpg"/><p>Again, we can verify that the workspace has been shared with team members by clicking on "BI Workspaces" in Navicat On-Prem Server:</p><img alt="shared_bi_workspace (25K)" src="https://www.navicat.com/link/Blog/Image/2024/20241213/shared_bi_workspace.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 17</a> works with <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> to take collaboration to a whole new level. Navicat 17 allows us to share connection settings, queries, aggregation pipelines, snippets, model workspaces, Business Intelligence (BI) workspaces and virtual group information with team members across the globe. Meanwhile, the On-Prem Server also offers a comprehensive suite of administration and development tools!</p><p>You can download Navicat Premium 17 for a <a class="default-links" href="https://www.navicat.com/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>. On-Prem Server may be downloaded <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">here</a>. Both are available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Creating Custom PostgreSQL Aggregates in Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/3087-creating-custom-postgresql-aggregates-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom PostgreSQL Aggregates in Navicat 17</title></head><body><b>Dec 6, 2024</b> by Robert Gravelle<br/><br/><p>One of the standout features of PostgreSQL is its extensive support for user-defined functions and data types. This allows developers to create custom conversion, operator, and aggregate functions. Aggregates offer a powerful way to perform complex calculations and transformations on data, going beyond the standard SQL aggregate functions like SUM, AVG, and COUNT. Both <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> make it easy to write custom functions and aggregates that integrate seamlessly with the database, thanks to their specialized graphical user interface (GUI). All we need to do is provide a few details and Navicat produces the pgSQL statement for us! In today's blog, we'll be creating an aggregate to work with the <a class="default-links" href="https://neon.tech/postgresql/postgresql-getting-started/postgresql-sample-database" target="_blank">DVD Rental database</a> that concatenates movie titles by category.</p><h1 class="blog-sub-title">About Aggregates</h1><p>Aggregates are a fundamental feature of SQL that allow you to perform calculations or transformations on a set of rows and return a single result. The most common aggregate functions are SUM, AVG, COUNT, MIN, and MAX, which allow you to quickly summarize data by calculating totals, averages, counts, minimum values, and maximum values, respectively.</p><p>However, the built-in aggregate functions provided by SQL don't always meet the specific needs of an application. This is where the ability to create custom aggregates becomes useful. 
Custom aggregates allow you to define your own logic for summarizing and transforming data, going beyond the standard set of SQL aggregates. The process typically involves defining a state transition function, which is called for each row to update an accumulator, as well as an optional final function that is called to produce the final aggregate result.</p><h1 class="blog-sub-title">Generating the Transition and Final Functions</h1><p>Our transition function, array_append_state(), will be called for each row to update the aggregate state.</p><p>To access Navicat's function editor, click the Function button in the main button bar and then click on "New Function" in the Objects toolbar:</p><img alt="new_function_button (110K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/new_function_button.jpg" /><p>Navicat will start us off with the main function definition. From there, we'll supply the function name, input parameters, and body:</p><pre>CREATE FUNCTION "public"."<strong>array_append_state</strong>" (<strong>current_state text[], new_value text</strong>)
  RETURNS <strong>text[]</strong> AS
$BODY$
BEGIN
  <strong>RETURN array_append(current_state, new_value);</strong>
END
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;</pre>  <img alt="array_append_state_function (58K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/array_append_state_function.jpg"/><p>When we're done, we can click Save to create the function. 
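The transition-plus-final-function pattern is not specific to Navicat or PostgreSQL. As a minimal illustrative sketch (Python's sqlite3 module rather than the blog's pgSQL, purely for comparison), the same comma_concat aggregate can be registered as a class whose step() and finalize() methods play the roles of SFUNC and FINALFUNC:

```python
# Illustrative sketch only: Python's sqlite3, not Navicat/pgSQL.
# step() is the state-transition function; finalize() is the final function.
import sqlite3

class CommaConcat:
    def __init__(self):
        self.state = []            # accumulator, like the text[] state

    def step(self, value):         # called once per row (SFUNC)
        self.state.append(value)

    def finalize(self):            # produces the result (FINALFUNC)
        return ", ".join(sorted(self.state))  # sorted for a deterministic result

conn = sqlite3.connect(":memory:")
conn.create_aggregate("comma_concat", 1, CommaConcat)
conn.executescript("""
    CREATE TABLE film (title TEXT, category TEXT);
    INSERT INTO film VALUES ('Alien', 'Sci-Fi'), ('Blade', 'Action'), ('Heat', 'Action');
""")
rows = conn.execute(
    "SELECT category, comma_concat(title) FROM film "
    "GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # [('Action', 'Blade, Heat'), ('Sci-Fi', 'Alien')]
```

PostgreSQL follows the same contract: the state function advances the accumulator for each row, and the final function turns the finished state into a single value.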
</p><p>Now we'll go back to the Objects tab and click on "New Function" to create the final function.</p><p>The array_to_comma_string() function will take an array of film titles and join its elements into a single comma-separated string:</p><pre>CREATE FUNCTION "public"."<strong>array_to_comma_string</strong>" (<strong>state text[]</strong>)
  RETURNS <strong>text</strong> AS
$BODY$
BEGIN
  <strong>RETURN array_to_string(state, ', ');</strong>
END
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;</pre><img alt="array_to_comma_string_function (54K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/array_to_comma_string_function.jpg"/><h1 class="blog-sub-title">Creating the comma_concat() Aggregate Function</h1><p>We can now plug our two functions into Navicat's Aggregate Editor. We can access the editor by clicking the Others button in the main button bar and then selecting "Aggregate" from the context menu:</p><img alt="aggregate_menu_command (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/aggregate_menu_command.jpg"/><p>In the form, we'll set the Input type to "text", enter a State type of "text[]", and supply our State and Final functions. Also, make sure that the Initial condition is an empty array ("{}"):</p><img alt="comma_concat_function_definition (58K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/comma_concat_function_definition.jpg" /><p>We can see the generated SQL by clicking on the Preview tab:</p><pre>CREATE AGGREGATE "public"."Untitled" (In "pg_catalog"."text") (
  SFUNC = "public"."array_append_state",
  STYPE = "pg_catalog"."text[]",
  FINALFUNC = "public"."array_to_comma_string",
  INITCOND = "{}",
  PARALLEL = UNSAFE
);
ALTER AGGREGATE "public"."Untitled"("pg_catalog"."text") OWNER TO "postgres";</pre><p>Notice that the name of the aggregate is "Untitled". 
Navicat will prompt us for the name when we hit the Save button and execute the command with the name that we provide.</p><img alt="save_as_dialog (50K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/save_as_dialog.jpg"/><h1 class="blog-sub-title">Using Our Custom Aggregate</h1><p>We can now invoke our aggregate function just like any other function. Here's a query that fetches a list of movies by category:</p><pre>SELECT
    c.name AS category,
    comma_concat(f.title) AS movies
FROM category c
JOIN film_category fc ON c.category_id = fc.category_id
JOIN film f ON fc.film_id = f.film_id
GROUP BY c.name
ORDER BY c.name;</pre><img alt="query_with_results (202K)" src="https://www.navicat.com/link/Blog/Image/2024/20241206/query_with_results.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we created a custom PostgreSQL aggregate in Navicat Premium to work with the DVD Rental database that concatenates movie titles by category.</p><p>Interested in giving Navicat Premium 17 a try? You can download it for a <a class="default-links" href="https://www.navicat.com/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Navicat On-Prem Server: Seamless Query Development and Collaboration</title>
<link>https://www.navicat.com/company/aboutus/blog/3073-navicat-on-prem-server-seamless-query-development-and-collaboration.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat On-Prem Server: Seamless Query Development and Collaboration</title></head><body><b>Nov 22, 2024</b> by Robert Gravelle<br/><br/><p>Several recent blogs have been dedicated to Navicat's latest collaboration tool: <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>. It's an on-premise solution for hosting a cloud environment that allows you to synchronize your connection settings, queries, aggregation pipelines, snippets, model workspaces, BI workspaces and virtual group information across all your devices. In today's blog, we'll learn how to develop queries directly in On-Prem Server, and then share them with our team in real-time.  </p><h1 class="blog-sub-title">Opening a Connection and Database</h1><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/3071-getting-started-with-navicat-on-prem-server-part-3.html" target="_blank">last blog</a> we created the "dvdrental MySQL 8" Connection and associated it with the "DVDRrental MySQL DB" project:</p><img alt="dvdrental_connection (11K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/dvdrental_connection.jpg"/><p>Clicking the connection name opens a new browser tab where we can log in to the database:</p><img alt="open_connection_screen (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/open_connection_screen.jpg"/><p>Click the "Open Connection" button to establish a secure connection to the database instance.</p><p>On the Connection screen, you'll find a list of databases under the Connection Tree on the left-hand side:</p><img alt="db_list (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/db_list.jpg"/><p>Double-clicking a database opens it and expands the tree to show tables, views, functions, events, and queries:</p><img alt="dvdrental_tables (72K)" 
src="https://www.navicat.com/link/Blog/Image/2024/20241122/dvdrental_tables.jpg"  /><h1 class="blog-sub-title">Writing and Saving a Query</h1><p>Now we'll write a brand new query in On-Prem Server's query editor. There are a couple of options for getting there: </p><p>First, we'll click on the "Queries" item under "Connection Tree" to select it. From there, we can either:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>click on the ellipsis (...) that appears beside the item and select "New Query" from the popup menu, or </li><li>click on the "New Query" button at the top of the main Query screen.</li></ul><p>Both options are highlighted in red below:</p><img alt="new_query_command_and_button (47K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/new_query_command_and_button.jpg"  /><p>Performing either of the above actions will open a new browser tab where we can develop our query. </p><p>Here's a query that fetches all films with a rental rate of 99 cents:</p><img alt="query_definition (19K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/query_definition.jpg" /><p>To save the query, click the "Save" icon at the top of the screen. 
A dialog will appear where we can enter the name:</p><img alt="query_save_dialog (27K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/query_save_dialog.jpg" /><h1 class="blog-sub-title">Confirming That a Query Has Been Added to a Project</h1><p>After clicking the "Save" button, if we return to the "Connection" tab and refresh the list of queries, we should see our new query:</p><img alt="new_query_in_queries_list (22K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/new_query_in_queries_list.jpg"/><p>We should also see that the "DVDRrental MySQL DB" project now shows the queries icon:</p><img alt="dvdrentals_project_with_queries_icon (20K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/dvdrentals_project_with_queries_icon.jpg" /><p>Now all members of the "DVDRrental MySQL DB" project will have access to the "Number of movies with rental rate of 0.99" query according to their assigned user rights.</p><h1 class="blog-sub-title">Managing Project Members</h1><p>If you wish to modify a project's members, you can click the ellipsis next to the project name on the "All Projects" screen and select the "Manage Members" option from the context menu:</p><img alt="manage_members_menu_item (27K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/manage_members_menu_item.jpg"  /><p>That will present a dialog where you can select members from a list:</p><img alt="collaborate_with_dialog (28K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/collaborate_with_dialog.jpg" /><p>If the user you are looking for does not appear in the list, you can add them via the Advanced Configurations -> Organisation Account -> All Users screen. That's where you can manage all of the users within your organisation. Navicat On-Prem Server allows creating local users, or creating external users using LDAP or AD authentication. 
</p><img alt="add_new_user_dialog (64K)" src="https://www.navicat.com/link/Blog/Image/2024/20241122/add_new_user_dialog.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog covered how to develop queries directly in the Navicat On-Prem Server, which we then shared with our team in real-time. </p><p>Interested in giving Navicat On-Prem Server a try? You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS (using Homebrew), and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Getting Started with Navicat On-Prem Server - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/3071-getting-started-with-navicat-on-prem-server-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Getting Started with Navicat On-Prem Server - Part 3</title></head><body><b>Nov 18, 2024</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Adding Projects, Members, and Connections</h1><p>In a <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2803-seamless-mysql-and-mariadb-management-with-navicat-on-prem-server.html" target="_blank">recent blog</a>, we learned about Navicat's latest collaboration tool: <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>. It's an on-premise solution for hosting a cloud environment where you can securely store Navicat objects internally at your location. Today's blog will cover how to work with projects in Navicat On-Prem Server.  Topics covered will include how to create a project, add members, and configure a connection to your database instance(s). </p><h1 class="blog-sub-title">Creating a New Project</h1><p>The first time you launch On-Prem Server, the main window will indicate that you have no projects yet. To add one, click the "+New" button and select "New Project" from the popup menu: </p><img alt="new_project_command (32K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/new_project_command.jpg"  /><p>Enter the project name in the dialog and click "Create" to add the new project.</p><p>After the dialog closes, you should see your new project under "Projects" in the main window:</p><img alt="dvdrental_project (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/dvdrental_project.jpg"/><h1 class="blog-sub-title">Adding Members To a Project</h1><p>Our new project will now allow us to share queries, code snippets, virtual group information, as well as model and Business Intelligence (BI) workspaces with our team members. Let's add some team members to the project now. 
There are a couple of ways to do that; the first is to click on the ellipsis (...) to the right of the project name. That will open a submenu with additional commands. The one we want is "Manage Members":</p><img alt="manage_members_command (13K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/manage_members_command.jpg"/><p>Clicking it opens a dialog where we can add members. </p><p>The other way to access the dialog is to click on the project name. That will show the project details. There, we can click the "Manage Members" icon:</p><img alt="manage_members_icon (19K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/manage_members_icon.jpg"/><p>When the Collaborate dialog first opens, only the project owner will appear in the list. To add members, click the "+Add Member" button at the bottom of the dialog:</p><img alt="collaborate_dialog (22K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/collaborate_dialog.jpg"/><p>That will show some additional fields at the bottom of the dialog where we can enter the member's name/email and rights. These include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Can Manage &amp; Edit: Read Objects, Write Objects, Manage Members and Rename Projects</li><li>Can Edit: Read Objects and Write Objects</li><li>Can View: Read Objects</li></ul><img alt="adding_a_member (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/adding_a_member.jpg" /><p>Click "Add" to add the member to the list.</p><h1 class="blog-sub-title">Preparing the Project For Object Sharing</h1><p>Before we can share queries, code snippets, virtual group information, or model workspaces and Business Intelligence (BI) workspaces with our team members, we must first create a connection to the database instance.</p><p>The first step for creating a new connection is similar to that of creating a new project. 
Again, we will click the "+New" button, but this time, we will choose the "New Connection" command from the popup menu:</p><img alt="new_connection_command (22K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/new_connection_command.jpg"/><p>The New Connection dialog will guide us through the process.</p><p>The first step is to select the connection type - i.e., the type of database we are connecting to:</p><img alt="connection_type (76K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/connection_type.jpg"/><p>On the next screen, we can provide the Connection Name, Project, and authentication details:</p><img alt="connection_properties (44K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/connection_properties.jpg"/><p>Once all of the fields have been populated, we can test the connection by clicking the "Test Connection" button. We can then proceed to create the new connection by clicking on "New".</p><p>Once the dialog closes, we should see the new connection on the Connections screen:</p><img alt="dvdrental_connection (11K)" src="https://www.navicat.com/link/Blog/Image/2024/20241118/dvdrental_connection.jpg"/><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to create a project, add members, and configure a connection to your database instance(s) in <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>. Now that we've configured the connection to our database instance, next week we'll proceed to establish a connection to the database, and share some Navicat objects with our team members via the project that we created here today.</p></body></html>]]></description>
</item>
<item>
<title>Query Customization Using Navicat-only Syntax</title>
<link>https://www.navicat.com/company/aboutus/blog/2834-query-customization-using-navicat-only-syntax.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Query Customization Using Navicat-only Syntax</title></head><body><b>Nov 13, 2024</b> by Robert Gravelle<br/><br/><p>Available in all editions of Navicat database administration and development tools (including Navicat Premium Lite!), Code Snippets allows you to insert reusable code into your SQL statements when working in the SQL Editor. Besides gaining access to a collection of built-in snippets, you can also define your own. One of the built-in categories supplies special Navicat-only Syntax for customizing the query results tab name as well as for supplying runtime parameters. Today's blog will demonstrate how to use both snippets in your queries using the free <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/sakila-installation.html" target="_blank">MySQL Sakila Database</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 17</a>.</p><h1 class="blog-sub-title">About the Code Snippet Pane</h1><p>Located on the right-hand side of the SQL Editor, the Code Snippets Pane provides an easy way to insert reusable code into SQL statements when working in the SQL Editor. If the editor window is docked to the Navicat main window, you can click the <strong style="font-family:courier new;color:blue;">()</strong> icon in the Information pane to view the snippets library. 
</p><img alt="query_editor_with_code_snippets_pane (158K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/query_editor_with_code_snippets_pane.jpg" /><p>You can bring up the two code snippets that we'll be learning about today by selecting the "Navicat-only Syntax" item in the Code Snippets drop-down menu:</p><img alt="navicat-only_syntax_drop-down_item (21K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/navicat-only_syntax_drop-down_item.jpg" /><h1 class="blog-sub-title">Customizing the Result Tab Name</h1><p>Every result set generated by a query is displayed in a separate tab below the Query Editor. Each tab is given the name "Result <i>n</i>" by default, where "n" is the order in which the query was executed. For example, the first query will be named "Result 1", the second "Result 2", etc.</p><p>Clicking the "Customize Result Tab Name" snippet will insert some special Navicat-only syntax at the current cursor position in the editor:</p><img alt="customize_result_tab_name_syntax (9K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/customize_result_tab_name_syntax.jpg"/><p>Once we've replaced the "tab_name" text with the desired tab name and the "Statement..." placeholder with the SQL, executing the query will now display the results with the name that we specified:</p><img alt="custom_result_tab (91K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/custom_result_tab.jpg"/><p>There are a couple of other ways to insert a Code Snippet in the Query Editor. We can:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>drag and drop a snippet from the library into the editor, or</li><li>start typing the name of a snippet in the editor. Smart code completion will pop up a list of suggestions for word completion automatically. 
From there, we can select the snippet from the list to insert the code into the editor.<p><img alt="auto-complete (49K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/auto-complete.jpg" /></p></li></ul><h1 class="blog-sub-title">Supplying a Runtime Parameter</h1><p>One of the advantages of using a stored procedure is that you can supply one or more input parameters rather than hard-coding values. Thanks to Navicat's Runtime Parameter Code Snippet, you can achieve the same result using regular SELECT statements. </p><p>Clicking the "Runtime Parameter" Snippet will insert a placeholder for the parameter at the current cursor position in the editor:</p><img alt="runtime_parameter_syntax (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/runtime_parameter_syntax.jpg" /><p>Now, when we execute the query, Navicat will present an input parameter dialog for us to provide the value to use: </p><img alt="input_parameter_dialog (70K)" src="https://www.navicat.com/link/Blog/Image/2024/20241113/input_parameter_dialog.jpg" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to use Navicat-only Syntax Code Snippets for customizing the query results tab name as well as for supplying runtime parameters to our queries. These were executed in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 17</a>. Interested in giving Navicat Premium 17 a try? You can download it for a <a class="default-links" href="https://www.navicat.com/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Getting Started with Navicat On-Prem Server - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2829-getting-started-with-navicat-on-prem-server-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Getting Started with Navicat On-Prem Server - Part 2</title></head><body><b>Nov 8, 2024</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Customizing the App Server Settings and Setting Up Notifications</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> is the perfect on-premise cloud solution for organizations that wish to maintain complete control over their data.  By maintaining full control over your system, you can ensure 100% data privacy, while still benefiting from the convenience and features of a cloud-based solution. Navicat On-Prem Server also provides comprehensive management tools for MySQL and MariaDB, enabling efficient administration, monitoring, and management of your database instances.</p><p>Last week's blog article covered the first steps for configuring Navicat On-Prem Server on Windows 10 with a MySQL 8 Community Server instance. Today we will go over the App Server, Notification Settings, and Confirmation steps to complete the configuration process. </p><h1 class="blog-sub-title">Customizing the App Server Settings</h1><p>As mentioned in part 1, after you have installed Navicat On-Prem Server and started it for the first time, a browser will pop up and open the Welcome page of your Navicat On-Prem Server at <code>http://&lt;your_ip_address&gt;:&lt;port_number&gt;</code>. The host address is the host name of the system that installed Navicat On-Prem Server, and the port number is 3030 by default. Hence, the server URL will usually be "http://127.0.0.1:3030/".</p><p>We can specify our own Application Port, Web URL, and Binding IP Address at this point. You may want to specify a Binding IP Address for users to access Navicat On-Prem Server if the machine has been assigned multiple IP addresses. 
An address of "0.0.0.0" means all IPv4 addresses on the machine, while "::" means all IPv4 and IPv6 addresses on the machine.</p><p>We'll stick with the defaults for the purposes of this tutorial.</p><img alt="app_server (37K)"  src="https://www.navicat.com/link/Blog/Image/2024/20241108/app_server.jpg" /><p>Click Next > to continue.</p><h1 class="blog-sub-title">Set Up Notifications</h1><p>Whenever an event is raised, such as a two-step verification, a security activity, a new device sign-in, a project invitation, or a system problem, Navicat On-Prem Server can send out notifications via SMS text message and/or email.</p><p>Supported SMS service providers include Clickatell, Twilio, or Others.</p><img alt="sms_settings (66K)"  src="https://www.navicat.com/link/Blog/Image/2024/20241108/sms_settings.jpg" /><p>Email settings include the SMTP Server, Port, login details, Sending Address, and a few other options:</p><img alt="email_settings (55K)"  src="https://www.navicat.com/link/Blog/Image/2024/20241108/email_settings.jpg"/><p>Click Next > to continue.</p><h1 class="blog-sub-title">Confirmation</h1><p>The confirmation screen gives you a chance to review all of your information before finalizing the configuration process.</p><img alt="confirmation (60K)"  src="https://www.navicat.com/link/Blog/Image/2024/20241108/confirmation.jpg" /><p>Here, you can either click the Back button to change some details or Finish to proceed. Note that the initial configuration process may take a few minutes to set up the repository database.</p><p>After the configuration has completed successfully, a login page will be displayed and you can log in to Navicat On-Prem Server with the manager user account.</p><h1 class="blog-sub-title">Conclusion</h1><p>This two-part series walked us through the steps to configure Navicat On-Prem Server on Windows 10 with a MySQL 8 Community Server instance.</p><p>Interested in giving Navicat On-Prem Server a try? 
You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS (using Homebrew), and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Getting Started with Navicat On-Prem Server - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2816-getting-started-with-navicat-on-prem-server-configuring-the-superuser,-on-prem-server-profile,-and-repo-server.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Getting Started with Navicat On-Prem Server: Configuring the Superuser, On-Prem Server Profile, and Repo Server</title></head><body><b>Nov 1, 2024</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Configuring the Superuser, On-Prem Server Profile, and Repository Server</h1><p>Navicat On-Prem Server is an on-premise solution for hosting a cloud environment where you can securely store Navicat objects internally at your location. It's one of two Navicat products whose goal is to foster increased collaboration amongst team members - the other being Navicat Cloud. The main difference between these two solutions is the location of shared objects: in the case of Navicat Cloud, they are stored in a central location on Navicat's servers, whereas the Navicat On-Prem Server resides on your organization's infrastructure. That being said, you can also install Navicat On-Prem Server on Amazon Linux 2 or within a Docker container. Today's blog will cover the first steps for configuring Navicat On-Prem Server on Windows 10 with a MySQL 8 Community Server instance. The next blog article will complete the series with the remaining steps.</p> <h1 class="blog-sub-title">Starting and Stopping Navicat On-Prem Server</h1><p>After the installation, Navicat On-Prem Server starts automatically. You can configure this behavior via the taskbar icon:</p><img alt="taskbar_icon (7K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/taskbar_icon.jpg" /><p>Right-clicking the icon will open the context menu. 
There, you can start &amp; stop Navicat On-Prem Server as well as enable or disable the Auto Start feature:</p><img alt="taskbar_menu (26K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/taskbar_menu.jpg" /><h1 class="blog-sub-title">The Welcome Page</h1><p>After you have installed Navicat On-Prem Server and started it for the first time, a browser will pop up and open the Welcome page of your Navicat On-Prem Server at <code>http://&lt;your_ip_address&gt;:&lt;port_number&gt;</code>. The host address is the host name of the system that installed Navicat On-Prem Server, and the port number is 3030 by default. Hence, the server URL will usually be "http://127.0.0.1:3030/".</p><p>On the Welcome page, we can either click the Setup On-Prem Server button to complete the basic configuration of Navicat On-Prem Server or import existing settings if we already have a Navicat On-Prem Server.</p><p>We'll enter the configuration details manually since this is our first installation. There are six sections to fill in:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Superuser</li><li>On-Prem Server Profile</li><li>Connect to Repository Server</li><li>App Server</li><li>Notification Settings</li><li>Confirmation</li></ul><p>The next several sections will cover the first three points above.</p><h1 class="blog-sub-title">Create the Superuser Account</h1><p>The Superuser is a local user (Admin) account that has unlimited access to Navicat On-Prem Server functionalities. You may supply the following profile information for the superuser: Username, Password, Full Name, Email, Mobile Number, Preferred Language, and Appearance. 
You can also upload a profile photo:</p><img alt="admin_details (48K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/admin_details.jpg" /><p>Once you've entered all of the details, click Next&gt; to continue.</p><h1 class="blog-sub-title">Set the On-Prem Server Profile</h1><p>On the next page, you can provide a few details about the On-Prem Server such as the On-Prem Server Name and Company Name.  You may upload a server logo image as well: </p><img alt="prem_server_details (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/prem_server_details.jpg" /><p>Click Next&gt; to move on to the next screen.</p><h1 class="blog-sub-title">Connect to the Repository Server</h1><p>The repository database stores all user information and Navicat objects. Supported databases include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>MySQL</li><li>MariaDB</li><li>PostgreSQL</li><li>SQL Server</li><li>Amazon RDS</li></ul><p>Ideally, you should allocate a separate instance to act as the repository database. Moreover, it should not reside on a production server.</p><img alt="repo_server_details (61K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/repo_server_details.jpg"/><p>For greater security, you can use SSL authentication and specify the type of encryption cipher to use.  Many of the common cipher suites are supported, including TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, and others.</p><img alt="ssl_cipher_suites (30K)" src="https://www.navicat.com/link/Blog/Image/2024/20241101/ssl_cipher_suites.jpg" /><h1 class="blog-sub-title">Going Forward</h1><p>In Part 1 of Getting Started with Navicat On-Prem Server, we went over how to configure the Superuser, On-Prem Server Profile, and Repository Server on Windows 10 with a MySQL 8 Community Server instance. 
The conclusion will cover the App Server, Notification Settings, and Confirmation.</p><p>Interested in giving Navicat On-Prem Server a try? You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS (using Homebrew), and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Seamless MySQL and MariaDB Management with Navicat On-Prem Server</title>
<link>https://www.navicat.com/company/aboutus/blog/2803-seamless-mysql-and-mariadb-management-with-navicat-on-prem-server.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Seamless MySQL and MariaDB Management with Navicat On-Prem Server</title></head><body><b>Oct 28, 2024</b> by Robert Gravelle<br/><br/><p>Navicat Collaboration provides the means for your team to collaborate on a variety of database objects, including connection settings, queries, aggregation pipelines, snippets, model workspaces, BI workspaces and virtual group information. Navicat offers two options for Collaboration: Navicat Cloud and <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a>. Whereas Navicat Cloud offers a central space for your team to store Navicat objects, Navicat On-Prem Server is an on-premise solution for hosting a cloud environment where you can securely store Navicat objects internally at your location. Today's blog will describe how Navicat On-Prem Server helps foster collaboration within your team and manage MySQL and MariaDB instances more effectively.</p><h1 class="blog-sub-title">Navicat On-Prem Server At a Glance</h1><p>Navicat On-Prem Server is an on-premise solution that lets you host a private cloud environment for managing Navicat files internally, enabling distributed teams to collaborate in real time, share data, coordinate tasks, and communicate seamlessly through a centralized platform.</p><p>With Navicat On-Prem Server, you retain full control over your system and guarantee complete data privacy, while still enjoying the convenience and features of a cloud-based solution. 
It also provides powerful management tools for MySQL and MariaDB databases, allowing you to administer and monitor your databases efficiently.</p><img alt="Desktop_On-Prem.png" src="https://navicat.com/images/Desktop_On-Prem.png" width="800" /><h1 class="blog-sub-title">Collaboration Tools</h1><p>Using Navicat On-Prem Server, you can synchronize your connection settings, queries, aggregation pipelines, snippets, model workspaces, BI workspaces, and virtual group information across all your devices. You can also invite your colleagues to join the project, enabling them to create and edit files collaboratively so you can work together in real time from anywhere across the globe.</p><p>Using the Object Filter, you can quickly locate the specific objects you need among vast amounts of content, ensuring you can always find the objects that matter most.</p><p>You can assign a role to coworkers for each project they work on and grant them access to projects based on their business functions. Each role determines whether a team member can create, view, or modify project files.</p><img alt="Screenshot_Navicat_On-Prem_Server_Team_Member.png" src="https://www.navicat.com/images/product_screenshot/Screenshot_Navicat_On-Prem_Server_Team_Member.png" width="800" /><p>All collaborative activities are tracked in real time within the Activity Log, allowing you to easily monitor everything happening in your project. 
You can also review the recent actions of specific team members, giving everyone a clear view of what others are working on.</p><img alt="" src="https://www.navicat.com/images/product_screenshot/Screenshot_Navicat_On-Prem_Server_Activity_Log.png" width="800" /><h1 class="blog-sub-title">MySQL and MariaDB Management</h1><p>Navicat On-Prem Server comes with a comprehensive set of tools for efficiently managing core database objects, allowing you to:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"> <li>administer users, permissions, and privileges</li>  <li>browse and manage tables, views, events, functions, and stored procedures</li>  <li>write, execute, and analyze database queries via the built-in SQL editor</li> </ul><p>MySQL and MariaDB connections are managed through an intuitive, user-friendly interface. Its simple design allows you to perform essential database operations with ease. Advanced search and filtering tools help you quickly find and work with specific database objects.</p><p>The clear and responsive interface breaks down query writing into structured tabs - perfect for managing a variety of database objects, including tables, views, events and functions. </p><p>Navicat On-Prem Server features both a grid-style interface and a detailed form view for viewing, updating, and deleting records. There are specialized cell editors for text, hexadecimal, images, and web content, as well as advanced sorting and filtering options to further enhance your data management capabilities. </p><p>Many of the same features found in Navicat development and administration tools are also included in Navicat On-Prem Server to help accelerate the coding process. 
These include syntax highlighting, query explain, and query result pinning, along with SQL stats such as query execution time.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog described how Navicat On-Prem Server helps foster collaboration within your team and manage MySQL and MariaDB instances more effectively. Some of the other features that you'll find in <a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server" target="_blank">Navicat On-Prem Server</a> include SMS and email notifications, LDAP/AD authentication, a web-based interface, link sharing, firewall security, and more! </p><p>Interested in giving Navicat On-Prem Server a try? You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-on-prem-server" target="_blank">14-day fully functional FREE trial</a>. It's available for Windows, macOS (using Homebrew), and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>The SQL Anti Join</title>
<link>https://www.navicat.com/company/aboutus/blog/2785-the-sql-anti-join.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The SQL Anti Join</title></head><body><b>Oct 21, 2024</b> by Robert Gravelle<br/><br/><p>One of the most powerful SQL features is the JOIN operation, providing an elegant and simple means of combining rows from one table with related rows from another table. However, there are times when we want to find values from one table that are NOT present in another table. As we'll see in today's blog article, joins can be utilized for this purpose as well, by including a predicate on which to join the tables. Known as anti joins, these can be helpful in answering a variety of business-related questions, such as:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Which customers did not place an order?</li>    <li>Which employees have not been assigned a department?</li>    <li>Which salespeople did not close a deal this week?</li></ul><p>This blog will offer a primer on the types of anti joins and how to write them using a few examples based on the PostgreSQL <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">dvdrental database</a>. 
We'll write and execute the queries in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Navicat Premium Lite 17</a>.</p> <h1 class="blog-sub-title">Two Types of Anti Joins</h1><p>There are two types of anti joins:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>left anti join: returns rows in the left table that have no matching rows in the right table</li><li>right anti join: returns rows in the right table that have no matching rows in the left table</li></ul><p>Returned rows are shown in <span style="font-weight:bold; color:rgb(19, 96, 254);">blue</span> in the diagram below:</p><img alt="anti-join_venn_diagram (56K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/anti-join_venn_diagram.jpg"  /><p>The next section will walk through a few different syntaxes we can use to create an anti join, using a left anti join for examples.</p><h1 class="blog-sub-title">Left Anti Join Using EXISTS</h1><p>Let's say that we wanted to find all the actors in the dvdrental database that didn't appear in any film. Unfortunately, SQL doesn't have a built-in syntax for this operation, but we can emulate it using EXISTS, or, more specifically, NOT EXISTS. Here's what that query would look like: </p><pre>SELECT *
FROM actor a
WHERE NOT EXISTS (
  SELECT *
  FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)</pre><p>If we run it in Navicat Premium Lite 17, we get the following results: </p><img alt="left_anti-join (85K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/left_anti-join.jpg"/><h1 class="blog-sub-title">Beware of NOT IN!</h1><p>Since EXISTS and IN are equivalent, you might be tempted to conclude that NOT EXISTS and NOT IN are also equivalent, but this is not always the case! 
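A quick toy example makes the danger concrete. This is a minimal sketch using Python's built-in sqlite3 module with tiny stand-in tables (hypothetical data, not the dvdrental database), where one film_actor row carries a NULL actor_id:

```python
import sqlite3

# Tiny stand-in tables (hypothetical data, not the dvdrental database).
# Carol appears in no film, and one film_actor row has a NULL actor_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE actor (actor_id INTEGER, name TEXT);
    CREATE TABLE film_actor (actor_id INTEGER, film_id INTEGER);
    INSERT INTO actor VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
    INSERT INTO film_actor VALUES (1, 10), (2, 20), (NULL, 30);
""")

# NOT EXISTS correctly finds the actor with no films.
not_exists = conn.execute("""
    SELECT name FROM actor a
    WHERE NOT EXISTS (SELECT * FROM film_actor fa
                      WHERE a.actor_id = fa.actor_id)
""").fetchall()

# NOT IN returns no rows at all: the NULL in film_actor.actor_id
# makes the predicate UNKNOWN for every actor.
not_in = conn.execute("""
    SELECT name FROM actor
    WHERE actor_id NOT IN (SELECT actor_id FROM film_actor)
""").fetchall()

print(not_exists)  # [('Carol',)]
print(not_in)      # []
```

The two queries differ only in syntax, yet one of them silently loses the answer.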
They are only equivalent if the right table (in this instance, film_actor) has a NOT NULL constraint on the foreign key (the actor_id).</p><img alt="film_actor_table_design (82K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/film_actor_table_design.jpg"/><p>In this specific instance, the NOT IN query returns the same results because of the NOT NULL constraint on the actor_id column:</p><img alt="left_anti-join_using_not_in (78K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/left_anti-join_using_not_in.jpg"/><p>If the actor_id column allowed nulls and contained even one NULL value, an empty result set would be returned. We can verify this via the following query:</p><pre>SELECT *
FROM actor
WHERE actor_id NOT IN (1, 2, 3, 4, 5, NULL)</pre><img alt="no_results_using_not_in (57K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/no_results_using_not_in.jpg"/><p>The above query doesn't return any rows because NULL represents an UNKNOWN value in SQL. Since we cannot be sure whether actor_id is in a set of values of which one value is UNKNOWN, the whole predicate becomes UNKNOWN!</p><p>The easiest way to avoid the danger posed by the NOT IN syntax is to stick with NOT EXISTS. It's really not even worth gambling on the presence of a NOT NULL constraint as the DBA might temporarily turn off the constraint to load some data, rendering your query useless in the interim.</p><h1 class="blog-sub-title">Alternate Syntax</h1><p>As alluded to in the introduction, it's also possible to perform an Anti Join using LEFT and RIGHT JOINs. For that to work, you need to add a WHERE clause with the IS NULL predicate. 
Here's the LEFT JOIN version of that syntax:</p><pre>SELECT a.*
FROM actor a
  LEFT JOIN film_actor fa
  ON a.actor_id = fa.actor_id
WHERE fa.actor_id IS NULL</pre><img alt="left_anti-join_using_left_join (80K)" src="https://www.navicat.com/link/Blog/Image/2024/20241021/left_anti-join_using_left_join.jpg"/><p>Just be aware that the LEFT/RIGHT JOIN syntax may run more slowly because the query optimizer doesn't recognize this as an ANTI JOIN operation.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to emulate a Left Anti Join using three variations of SQL syntax. Of these, NOT EXISTS should be your first choice as it best communicates the intent of an ANTI JOIN and tends to execute the fastest.</p><p>Interested in giving Navicat Premium Lite 17 a try? You can download it for a <a class="default-links" href="https://www.navicat.com/download/navicat-premium-lite" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>The SQL Semi Join</title>
<link>https://www.navicat.com/company/aboutus/blog/2783-the-sql-semi-join.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The SQL Semi Join</title></head><body><b>Oct 15, 2024</b> by Robert Gravelle<br/><br/><p>Most database developers and administrators are familiar with the standard inner, outer, left, and right JOIN types. While these can be written using ANSI SQL, there are other types of joins that are based on relational algebra operators that don't have a syntax representation in SQL. Today we'll be looking at one such join type: the Semi Join. Next week we'll tackle the similar Anti Join. To gain a better understanding of how these types of joins work, we'll execute some SELECT queries in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Navicat Premium Lite 17</a> against the PostgreSQL <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">dvdrental database</a>. It's a free database that's based on the MySQL Sakila Sample Database.</p><h1 class="blog-sub-title">Semi Joins Explained</h1><p>Imagine for a moment that ANSI SQL did support Semi Joins. If it did, the syntax would probably be similar to that of the Cloudera Impala syntax extension, which is LEFT SEMI JOIN and RIGHT SEMI JOIN. With that in mind, here's what a query that utilizes a Semi Join might look like: </p><pre>SELECT *
FROM actor
LEFT SEMI JOIN film_actor USING (actor_id)</pre><p>The above query would return all actors that played in films. The catch is that we don't want any films in the results, nor do we want multiple rows of the same actor. We only want each actor once (or zero times) in the result. The word "Semi" originates from Latin and translates to "half" in English. Hence, our query implements only "half the join", in this case, the left half. In SQL, there are two alternative syntaxes that we can use to accomplish a Semi Join: EXISTS and IN. 
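Before looking at each syntax, here is a minimal sketch of why a semi join differs from a plain join, using Python's built-in sqlite3 module with made-up mini tables (not the dvdrental data):

```python
import sqlite3

# Made-up mini tables (not the dvdrental data): Alice appears in
# three films, Bob in none.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE actor (actor_id INTEGER, name TEXT);
    CREATE TABLE film_actor (actor_id INTEGER, film_id INTEGER);
    INSERT INTO actor VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO film_actor VALUES (1, 10), (1, 11), (1, 12);
""")

# A plain INNER JOIN repeats Alice once per matching film...
joined = conn.execute("""
    SELECT a.name FROM actor a
    JOIN film_actor fa ON a.actor_id = fa.actor_id
""").fetchall()

# ...whereas the EXISTS emulation of a semi join returns her once.
semi = conn.execute("""
    SELECT name FROM actor a
    WHERE EXISTS (SELECT * FROM film_actor fa
                  WHERE a.actor_id = fa.actor_id)
""").fetchall()

print(joined)  # [('Alice',), ('Alice',), ('Alice',)]
print(semi)    # [('Alice',)]
```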
</p><h1 class="blog-sub-title">Semi Joins Using EXISTS</h1><p>Here is the equivalent of the Semi Join using EXISTS:</p><pre>SELECT *
FROM actor a
WHERE EXISTS (
  SELECT *
  FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)</pre><p>If we execute our query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Navicat Premium Lite 17</a>, we can see that it works just as expected:</p><img alt="semi_join_exists (147K)" src="https://www.navicat.com/link/Blog/Image/2024/20241015/semi_join_exists.jpg"/><p>Rather than use a join, the EXISTS operator checks for the presence of one or more rows for each actor in the film_actor table. Thanks to the WHERE clause, most databases will be able to recognize that we're performing a SEMI JOIN rather than an ordinary EXISTS() predicate. </p><h1 class="blog-sub-title">Semi Joins Using IN</h1><p>IN and EXISTS are exactly equivalent SEMI JOIN emulations, so the following query will produce the exact same results in most databases as the previous EXISTS query: </p><pre>SELECT *
FROM actor
WHERE actor_id IN (
  SELECT actor_id FROM film_actor
)</pre><p>Here again is the above query and results in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium-lite" target="_blank">Navicat Premium Lite 17</a>:</p><img alt="semi_join_in (157K)" src="https://www.navicat.com/link/Blog/Image/2024/20241015/semi_join_in.jpg" /><p>EXISTS is considered to be the more powerful (albeit a bit more verbose) syntax. </p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to emulate a Semi Join using ANSI SQL syntax. In addition to being the optimal solution in terms of "correctness", there are also some performance benefits when using a "SEMI" JOIN rather than an INNER JOIN, as the database can stop looking for matches as soon as it finds the first.</p><p>Interested in giving Navicat Premium Lite 17 a try? 
You can download it for a <a class="default-links" href="https://www.navicat.com/download/navicat-premium-lite" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Filtering Aggregated Fields Using the Having Clause</title>
<link>https://www.navicat.com/company/aboutus/blog/2781-filtering-aggregated-fields-using-the-having-clause.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Filtering Aggregated Fields Using the Having Clause</title></head><body><b>Oct 8, 2024</b> by Robert Gravelle<br/><br/><p>If you have been writing SQL queries for some time, you are probably quite familiar with the WHERE clause. While it cannot filter on aggregated values, there is a way to filter records according to aggregate values, and that is by using the HAVING clause. This blog will cover how it works as well as provide a few examples of using it in SELECT queries.</p><h1 class="blog-sub-title">Aggregation and the HAVING Clause</h1><p>Aggregation is typically used in conjunction with grouping. In SQL, that's accomplished using the GROUP BY clause. Aggregation, together with grouping, allows us to glean high-level insights into our data. For example, an eCommerce company might want to track sales over a given time period.</p><p>In many cases, we may not want to apply the GROUP BY clause on the entire dataset. In those instances, we can employ the GROUP BY clause along with the conditional HAVING clause to filter out unwanted results. Similar to the WHERE clause, HAVING specifies one or more filter conditions, but for a group or an aggregation. As such, HAVING is always placed after the WHERE and GROUP BY clauses but before the (optional) ORDER BY clause:</p><pre>SELECT column_list
FROM table_name
WHERE where_conditions
GROUP BY column_list
HAVING having_conditions
ORDER BY order_expression</pre><h1 class="blog-sub-title">Some Practical Examples</h1><p>To get a better idea of how HAVING works, let's run a few SELECT queries against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>.</p><p>Our first query lists our top movie renters, sorted in descending order, so that the person with the most rentals appears at the top. 
We'll use the HAVING clause to remove customers with fewer than three rentals in order to shorten the list somewhat:</p><pre>SELECT
  c.customer_id,
  c.first_name,
  c.last_name,
  COUNT(r.rental_id) AS total_rentals
FROM
  customer AS c
  LEFT JOIN rental AS r ON c.customer_id = r.customer_id
GROUP BY c.customer_id
HAVING total_rentals >= 3
ORDER BY total_rentals DESC;</pre><p>Here is the query and the first page of results in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="top movie renters (89K)" src="https://www.navicat.com/link/Blog/Image/2024/20241008/top%20movie%20renters.jpg" height="725" width="464" /><p>Judging by those rental numbers, we could have narrowed down the list substantially more!</p><h3>Filtering Rows Using Both WHERE and HAVING</h3><p>Just as GROUP BY and ORDER BY are applied at different points in the querying process, so too are WHERE and HAVING. Hence, we can include both to filter results both before and after grouping and aggregation. 
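The order of operations is easy to verify on a toy table. Here's a minimal, self-contained sketch using Python's sqlite3 module (made-up rental rows, not the Sakila data):

```python
import sqlite3

# Made-up rental rows (not the Sakila data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rental (customer_id INTEGER, rental_date TEXT);
    INSERT INTO rental VALUES
        (1, '2005-02-01'), (1, '2005-03-15'), (1, '2005-09-01'),
        (2, '2005-01-10'), (2, '2005-11-20'),
        (3, '2005-04-05');
""")

# WHERE trims rows before grouping; HAVING trims groups after
# aggregation, which is why it can reference the COUNT() result.
rows = conn.execute("""
    SELECT customer_id, COUNT(*) AS total_rentals
    FROM rental
    WHERE rental_date BETWEEN '2005-01-01' AND '2005-06-30'
    GROUP BY customer_id
    HAVING total_rentals >= 2
    ORDER BY total_rentals DESC
""").fetchall()

print(rows)  # [(1, 2)]
```

Only customer 1 survives: the WHERE clause first discards rentals outside the date window, and HAVING then discards the groups whose remaining count is below two.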
For example, we can add a WHERE clause to restrict results to the first half of a given year:</p><pre>SELECT
  c.customer_id,
  c.first_name,
  c.last_name,
  COUNT(r.rental_id) AS total_rentals
FROM
  customer AS c
  LEFT JOIN rental AS r ON c.customer_id = r.customer_id
WHERE r.rental_date BETWEEN '2005-01-01' AND '2005-06-30'
GROUP BY c.customer_id
HAVING total_rentals >= 3
ORDER BY total_rentals DESC;</pre><p>Once again, here is the above query and the first page of results in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="top movie renters for first half of 2005 (96K)" src="https://www.navicat.com/link/Blog/Image/2024/20241008/top%20movie%20renters%20for%20first%20half%20of%202005.jpg" height="722" width="461" /><h1 class="blog-sub-title">Combining Multiple Conditions</h1><p>Just as the WHERE clause supports multiple conditions using the AND and OR keywords, so too does HAVING. For example, we could find customers whose rental numbers fall within a given range by modifying the HAVING clause to something like the following:</p><pre>HAVING total_rentals >= 3 AND total_rentals &lt;= 10</pre><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to filter grouped and aggregated fields using the HAVING clause.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Writing SELECT Queries with EXISTS</title>
<link>https://www.navicat.com/company/aboutus/blog/2778-writing-select-queries-with-exists.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Writing SELECT Queries with EXISTS</title></head><body><b>Sep 26, 2024</b> by Robert Gravelle<br/><br/><p>The SQL EXISTS operator offers us an easy way to retrieve data based on the existence (or non-existence) of some other data. More specifically, it's a logical operator that evaluates the results of a subquery and returns a boolean value indicating whether rows were returned or not. While the IN operator can be utilized for much the same purpose, there are some differences to be aware of. Today's blog will cover how to use the EXISTS operator using a few examples as well as provide some guidance as to when to use EXISTS rather than IN.</p><h1 class="blog-sub-title">EXISTS In Action</h1><p>Although the EXISTS operator can be used in a SELECT, UPDATE, INSERT or DELETE statement, we'll stick with SELECT queries to keep things simple. As such, the syntax we will be using will closely resemble this:</p><pre>SELECT column_name(s)
FROM table_name
WHERE EXISTS ( SELECT column_name(s)
               FROM table_name
               WHERE condition );</pre>  <p>We'll be executing our queries against a couple of PostgreSQL tables - customer and account - such as those we might find in a banking database. 
Here they are in Navicat for PostgreSQL's Grid View:</p><p><img alt="customer_table (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/customer_table.jpg" /></p><p><img alt="account_table (28K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/account_table.jpg"/></p><p>Now we can see all the customers who have an account associated with their customer_id using the following query:</p><pre>SELECT *
FROM customer C
WHERE EXISTS ( SELECT *
               FROM account A
               WHERE C.customer_id = A.customer_id );</pre>  <p>Here is the above query with the results in Navicat Premium's Query Editor:</p><img alt="customers_with_accounts (49K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/customers_with_accounts.jpg"/><h3>Using NOT with EXISTS</h3><p>Conversely, prefacing the EXISTS operator with the NOT keyword causes the query to only select records where there is no matching row in the subquery. We can use NOT EXISTS to fetch all orphaned accounts, that is to say, accounts with no associated customer:</p><pre>SELECT *
FROM account A
WHERE NOT EXISTS ( SELECT *
                   FROM customer C
                   WHERE A.customer_id = C.customer_id );</pre>  <p>That returns the account for customer #4 since there is no customer with that ID in the customer table.</p><img alt="accounts_without_customers (47K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/accounts_without_customers.jpg" /><h1 class="blog-sub-title">Replacing EXISTS with Joins</h1><p>Queries that use the EXISTS operator can be a little slow to execute because the subquery needs to be executed for each row of the outer query. For that reason, you should consider using joins whenever possible. 
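On toy data it's easy to confirm that a join-based rewrite returns the same customers as the EXISTS version. The sketch below uses Python's sqlite3 module with hypothetical stand-in rows (not the banking tables above):

```python
import sqlite3

# Hypothetical stand-in rows: customer 3 has no account, and
# customer 1 has two accounts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER, name TEXT);
    CREATE TABLE account (account_id INTEGER, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 'Ann'), (2, 'Ben'), (3, 'Cam');
    INSERT INTO account VALUES (100, 1), (101, 1), (102, 2);
""")

exists_rows = conn.execute("""
    SELECT C.name FROM customer C
    WHERE EXISTS (SELECT * FROM account A
                  WHERE C.customer_id = A.customer_id)
    ORDER BY C.name
""").fetchall()

# The join rewrite needs DISTINCT so that Ann, who has two
# accounts, does not appear twice in the result.
join_rows = conn.execute("""
    SELECT DISTINCT C.name
    FROM customer C
      JOIN account A ON C.customer_id = A.customer_id
    ORDER BY C.name
""").fetchall()

print(exists_rows)  # [('Ann',), ('Ben',)]
print(join_rows)    # [('Ann',), ('Ben',)]
```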
In fact, we can rewrite the above EXISTS query using a LEFT JOIN, filtering out the unmatched rows and adding DISTINCT so that a customer with several accounts appears only once:</p><pre>SELECT DISTINCT C.*
FROM customer C
  LEFT JOIN account A ON C.customer_id = A.customer_id
WHERE A.customer_id IS NOT NULL;</pre><img alt="left_join (36K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/left_join.jpg" /><h1 class="blog-sub-title">IN vs EXISTS Operators</h1><p>Although the IN operator is typically used to filter a column for a certain list of values, it can also be applied to the results of a subquery. Here's the equivalent to our first query, this time using IN rather than EXISTS:</p><pre>SELECT *
FROM customer
WHERE customer_id IN (SELECT customer_id FROM account);</pre> <p>Note that the subquery can only select the column that we want to compare against, as opposed to SELECT *. Nonetheless, the IN query produces the same results:</p><img alt="in_query (43K)" src="https://www.navicat.com/link/Blog/Image/2024/20240926/in_query.jpg" /><p>With both operators being so similar, database developers are often unsure as to which to use. As a general rule, you should <strong>use the IN operator when you want to filter rows based on a specific list of values. Use EXISTS when you want to check for the existence of rows that meet certain conditions in a subquery.</strong></p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to use the EXISTS operator as well as how to decide whether to use EXISTS or IN.</p><p>Interested in giving Navicat Premium 17 a try? You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>The Search For a Universal SQL Syntax</title>
<link>https://www.navicat.com/company/aboutus/blog/2741-the-search-for-a-universal-sql-syntax.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Search For a Universal SQL Syntax</title></head><body><b>Sep 12, 2024</b> by Robert Gravelle<br/><br/><p>In the mid-nineties, Sun Microsystems came out with a language that you could "write once, [and] run everywhere." That language was, of course, Java. And, while it did go on to be one of the most popular programming languages to this day, their slogan turned out to be just a little optimistic. The course of SQL bears some strong similarities to that of Java: in theory, SQL code too can be ported from one database to another, or even across operating systems, with little or no modification. At least, that's the dream. In the real world, production-level code tends to require some tweaking in order to work in a new environment. This blog will outline some of the reasons that SQL syntax may differ across different database vendors. </p><h1 class="blog-sub-title">The ANSI SQL Specification</h1><p>ANSI, which stands for American National Standards Institute, defines the basic set of syntax rules and commands that are to be used to interact with relational databases. However, much like browser implementations of HTML, CSS, and ECMAScript, most database implementations of SQL are imperfect and/or incomplete. ANSI SQL allows for some flexibility in the level of conformance, so there is no strict vendor requirement to implement the full specification. But even at the basic, lowest level, all vendors diverge at least a little bit.</p><p>Beyond that, there are non-standard extensions, which all vendors support in one form or another. Even something as basic as indexing is non-standard. The ANSI SQL specification says nothing about indexes, so every vendor's implementation of indexing is a supplement to the standard. That opens the door for vendors to come up with whatever syntax they deem fit or most advantageous to their brand. 
The result: a variety of SQL dialects, which are largely the same, but with some distinctions.</p><h1 class="blog-sub-title">Writing Versatile SQL</h1><p>If you want SQL code that will work across all database types, you should stick to standard SQL statements like SELECT, WHERE, GROUP BY, ORDER BY, etc. Aggregate functions like SUM(), AVG(), MIN(), and MAX() will also be understood by all popular database types, including SQL Server, MySQL, PostgreSQL, SQLite, and Oracle. Here's a query that should work with any database:</p><pre>SELECT
    c.customer_id,
    c.customer_name,
    SUM(p.amount) AS total_sales
FROM customers AS c
    LEFT JOIN purchases AS p
    ON c.customer_id = p.customer_id
WHERE
    c.customer_location = 'Canada'
GROUP BY
    c.customer_id,
    c.customer_name;</pre><h1 class="blog-sub-title">Learning SQL</h1><p>If you're just starting out in database administration and/or development, you should concentrate on SQL that will apply to as many database types as possible. You should also work with a database that is highly ANSI SQL compliant and popular, such as MySQL. It has consistently been the most popular database for the past few decades. It's also highly compliant, making it an excellent learning tool. There are many articles on it and most SQL samples were developed and run on MySQL. Microsoft SQL Server comes in at a close second. However, it uses Microsoft's dialect of SQL, called T-SQL. Having the most dissimilar SQL to other platforms makes SQL Server a less-than-ideal starter database. You're probably better off choosing PostgreSQL or SQLite, which are also quite popular and ANSI compliant. 
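SQLite is handy for this kind of experimentation because it ships with Python. As a rough sketch (the customers/purchases tables and rows below are made up for illustration), you can check that a standards-leaning query like the one above runs unmodified:

```python
import sqlite3

# Illustrative tables only; the names echo the query above but the
# rows are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, customer_name TEXT,
                            customer_location TEXT);
    CREATE TABLE purchases (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ann', 'Canada'), (2, 'Ben', 'France');
    INSERT INTO purchases VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Nothing here but standard SELECT/JOIN/WHERE/GROUP BY, so the same
# statement should also run on MySQL, PostgreSQL, SQL Server, and Oracle.
rows = conn.execute("""
    SELECT c.customer_id, c.customer_name, SUM(p.amount) AS total_sales
    FROM customers AS c
        LEFT JOIN purchases AS p ON c.customer_id = p.customer_id
    WHERE c.customer_location = 'Canada'
    GROUP BY c.customer_id, c.customer_name
""").fetchall()

print(rows)  # [(1, 'Ann', 15.0)]
```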
SQLite is particularly attractive to novices because of its small size and portability.</p><p>Here are just some of the differences that you're likely to find between databases:</p><h3>Case Sensitivity</h3><p>Consider the clauses <code>WHERE name = 'Rob'</code> and <code>WHERE name = 'rob'</code>, using each database's default collation:</p><table border=1 cellspacing=2 width=500><tr><th>MySQL</th><th>PostgreSQL</th><th>SQLite</th><th>SQL Server</th></tr><tr><td>Equivalent</td><td>Not Equivalent</td><td>Not Equivalent</td><td>Equivalent (default collation)</td></tr></table><h3>Use of Quotation Marks</h3><p>Some databases only support single quotes, while others allow both single and double quotes:</p><table border=1 cellspacing=2 width=500><tr><th>MySQL</th><th>PostgreSQL</th><th>SQLite</th><th>SQL Server</th></tr><tr><td>Both</td><td>Single Only</td><td>Both</td><td>Single Only</td></tr></table><h3>Column and Table Aliases</h3><p>MySQL, PostgreSQL, and SQLite all use the "AS" keyword to denote aliases, i.e., <code>SELECT SUM(score) AS total_score</code>. SQL Server accepts "AS" as well, but additionally offers an equals-sign form with the alias on the left, i.e., <code>SELECT total_score = SUM(score)</code>.</p><h3>Date/Time Functions</h3><p>Each database implements its own date and time functions:</p><table border=1 cellspacing=2 width=500><tr><th>MySQL</th><th>PostgreSQL</th><th>SQLite</th><th>SQL Server</th></tr><tr><td>CURDATE() CURTIME() EXTRACT()</td><td>CURRENT_DATE CURRENT_TIME EXTRACT()</td><td width=100>DATE('now') strftime()</td><td>GETDATE() DATEPART()</td></tr></table><h1 class="blog-sub-title">Navicat Premium: the Universal Tool</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> is the tool of choice for working with a variety of database types. Not only can it connect to multiple databases simultaneously, but its Code Snippets feature makes writing queries against your preferred database type easier than ever before. 
The Code Snippets feature allows you to insert reusable code into your SQL statements when working in the SQL Editor. Besides gaining access to a collection of built-in snippets for common control flow statements and functions, you can also define your own. </p><img alt="code_snippets (119K)" src="https://www.navicat.com/link/Blog/Image/2024/20240912/code_snippets.jpg"/><p>You can download Navicat 17 for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Fields In Navicat BI: Calculated Fields</title>
<link>https://www.navicat.com/company/aboutus/blog/2723-creating-custom-fields-in-navicat-bi-calculated-fields.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Fields In Navicat BI: Calculated Fields</title></head><body><b>Sep 6, 2024</b> by Robert Gravelle<br/><br/><p>It's a well-established practice in database design and development to avoid storing any data that can be calculated or reconstructed from other fields. As a result, you may be missing some data when constructing your charts in Navicat BI. But that's not an issue, as Navicat BI provides Calculated Fields specifically for that purpose. In today's blog, we'll be using Calculated Fields to build a chart that shows the average rental times - i.e., how long a customer keeps a movie before returning it - per customer. As with most of the articles in this series, the data will be curated from the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>. </p> <h1 class="blog-sub-title">Fetching the Customer Rental Information</h1><p>As mentioned in previous blogs in this series, we should create the data source before designing the chart as we will need to specify the data source that the chart uses. Data sources reference tables in your connections or data in files/ODBC sources, and can select data from tables on different server types. After creating a new data source, we can click on "New Data Source Query" to open the Query Designer. There, we can write our SQL statement directly in the editor, use the visual Query Builder, or import a query from Navicat. 
Here's the SELECT statement that will fetch customer info, along with the rental amount, the date that the film was rented, and the date on which it was returned:</p><img alt="customer_rental_info_data_source (111K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/customer_rental_info_data_source.jpg" height="771" width="720" /><p>Once we save the query and refresh the data, we should see all the query fields and result set:</p><img alt="customer_rental_info_data_source_with_data (253K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/customer_rental_info_data_source_with_data.jpg" height="865" width="819" /><p>We can now use the rental_date and return_date fields to calculate the rental duration. To do that, right-click the return_date in the field list (Control-click on macOS) and select New Calculated Field... from the context menu:</p><img alt="new_calculated_field_menu_command (44K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/new_calculated_field_menu_command.jpg" height="316" width="403" /><p>In the New Calculated Field dialog, you'll find all sorts of useful functions, including Aggregate functions, Datetime functions, Logic functions, and others. We'll use the DATEDIFF() function to calculate the number of days between the rental_date and return_date fields. The function accepts a time Unit, as well as a Start and End date. We can read the description below the function list for more information. 
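Conceptually, the calculated field is just a whole-day date difference. As a rough illustration outside of Navicat BI (DATEDIFF() here is a Navicat BI function, so this sqlite3 sketch with made-up rental rows uses SQLite's julianday() to stand in for it):

```python
import sqlite3

# Made-up rental rows; DATEDIFF() with a day unit boils down to a
# whole-day date difference, emulated here via julianday().
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rental (customer_id INTEGER, rental_date TEXT,
                         return_date TEXT);
    INSERT INTO rental VALUES
        (1, '2005-05-24', '2005-05-28'),
        (1, '2005-06-01', '2005-06-03'),
        (2, '2005-05-25', '2005-05-30');
""")

# Average rental duration (in days) per customer, as in the chart.
rows = conn.execute("""
    SELECT customer_id,
           AVG(julianday(return_date) - julianday(rental_date))
               AS avg_rental_days
    FROM rental
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()

print(rows)  # [(1, 3.0), (2, 5.0)]
```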
We'll pass a "D" (day) for the unit, along with the two date fields, as follows:</p><img alt="new_calculated_field_dialog (116K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/new_calculated_field_dialog.jpg" height="897" width="781" /><p>There's a preview at the bottom of the dialog that tells us that we're getting the result we want.</p><p>After clicking the OK button, we should see our new field in the field list and results:</p><img alt="customer_rental_info_data_source_with_calculated_field (156K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/customer_rental_info_data_source_with_calculated_field.jpg" height="684" width="741" /> <h1 class="blog-sub-title">Building the Average Rental Duration Per Customer Chart</h1><p>Since every customer ID is a separate data point, a scatter chart might work well. A scatter chart plots data with individual data points placed along the X and Y axes. We'll use the customer_id for the X axis and the rental_duration (Average) for the Y axis. Just drag the fields over to the X-Axis and Y-Axis fields in the chart designer, apply the Average aggregate to the rental_duration, and, presto, instant chart!</p><img alt="avg_rental_duration_per_customer_chart_in_design_mode (116K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/avg_rental_duration_per_customer_chart_in_design_mode.jpg" height="867" width="734" /><p>Here is the full chart in Present mode:</p><img alt="avg_rental_duration_per_customer_chart_in_present_mode (104K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/avg_rental_duration_per_customer_chart_in_present_mode.jpg" height="690" width="1260" /> <h1 class="blog-sub-title">Bonus: Displaying the Number of Rentals Per Customer</h1><p>While averages are helpful, it might also be useful to show how many times each customer rented one or more films. We can use an Aggregate function for this purpose. 
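This per-customer rental count boils down to an SQL COUNT() grouped by customer. A hedged sketch of that equivalence, again with Python's sqlite3 and invented sample rows rather than the actual dvdrental data:

```python
import sqlite3

# Invented sample data: (customer_id, amount) pairs for three rentals.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rental (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO rental VALUES (?, ?)",
                 [(1, 2.99), (1, 0.99), (2, 4.99)])

# Counting amount entries grouped by customer_id, as the Aggregate
# function does in Navicat BI, corresponds to COUNT ... GROUP BY in SQL.
rows = conn.execute(
    "SELECT customer_id, COUNT(amount) FROM rental "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(rows)  # [(1, 2), (2, 1)]
```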
We'll count the number of amount entries in the result set and group them by customer_id. Here is the New Calculated Field dialog with the call to the AGGCOUNT() function:</p><img alt="aggcount_function (102K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/aggcount_function.jpg" height="832" width="722" /><p>In the Chart Designer, we'll drag our new number_of_rentals field to the Color slot. Adding an ascending sort will order the legend items from the lowest to highest number of rentals:</p><img alt="avg_rental_duration_per_customer_chart_in_design_mode_with_num_or_rentals (118K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/avg_rental_duration_per_customer_chart_in_design_mode_with_num_or_rentals.jpg" height="845" width="731" /><p>We can view the details of an individual data point by hovering the cursor over it. A tooltip will appear showing the number of rentals, the customer_id, as well as the average rental_duration in days:</p><img alt="data_point_details (17K)" src="https://www.navicat.com/link/Blog/Image/2024/20240906/data_point_details.jpg" height="141" width="263" /> <h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to use Calculated Fields in your Navicat BI data sources and charts. Calculated Fields are just one of the new features included with the latest version of Business Insight (BI). This also brings us to the end of this series on Custom Fields. If you'd like to try Navicat BI, you can download the stand-alone version for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Fields In Navicat BI: Custom Sort Orders</title>
<link>https://www.navicat.com/company/aboutus/blog/2700-creating-custom-fields-in-navicat-bi-custom-sort-orders.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Fields In Navicat BI: Custom Sort Orders</title></head><body><b>Aug 23, 2024</b> by Robert Gravelle<br/><br/><p>In Navicat BI, data sources reference tables in your connections or data in files/ODBC sources, and can select data from tables on different server types. The fields in the dataset can be used to construct a chart. In fact, when building a chart, you will need to specify the data source that's used to populate the chart. </p><p>As we've seen throughout this series, data sources support custom field types. These include: Type-Changed, Concatenated, Mapped, Custom-Sorted, and Calculated.  The last blog covered how to use Custom-Sorted Fields to sort chart data according to a reference field. This week, we'll be learning how to set an explicit sort order. In order to do so, we will create a Vertical Bar Chart for the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a> that shows a sum of movie rental proceeds by month. 
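The core idea behind an explicit sort order can be illustrated outside Navicat BI as well: compare month names by their position in a reference list instead of alphabetically. A small Python sketch with invented monthly totals (not the actual dvdrental figures):

```python
# Reference list defining the desired (chronological) order.
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

# Invented (month, rental total) rows, deliberately out of order.
rows = [("July", 1200.50), ("February", 980.00), ("May", 1103.25)]

# Alphabetical sorting would give February, July, May; sorting by list
# position restores chronological order.
chronological = sorted(rows, key=lambda r: MONTHS.index(r[0]))
print([month for month, _ in chronological])  # ['February', 'May', 'July']
```

A Custom-Sorted field plays the role of the reference list here: it attaches an explicit position to each value so the chart can order by position rather than by name.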
</p><h1 class="blog-sub-title">Configuring the Data Source</h1><p>As mentioned earlier, our chart will require a data source that fetches the relevant data, so let's create a new data source named "Rentals by Month".</p><p>Here's a query that I created in Navicat for PostgreSQL: </p><img alt="rentals_by_month_query (17K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/rentals_by_month_query.png" height="584" width="427" /><p>We can now import it into our data source by clicking the Import Query button:</p><img alt="rentals_by_month_data_source (141K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/rentals_by_month_data_source.jpg" height="720" width="1039" /><p>After refreshing the data, we can see the query fields and results:</p><img alt="rentals_by_month_data_source_with_data (58K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/rentals_by_month_data_source_with_data.jpg" height="695" width="482" /><h1 class="blog-sub-title">Designing the Sales by Month Chart</h1><p>Time to design our chart. First, let's see what happens when we sort by month name:</p><img alt="rentals_by_month_chart_sorted_by_month_name (77K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/rentals_by_month_chart_sorted_by_month_name.jpg" height="869" width="728" /><p>As you can see, this sorts the bars alphabetically according to the month name, and not in chronological order. To sort them chronologically, we'll need to add a Custom-Sorted field to the data source by right-clicking the month (Control-click on macOS) in the field list and selecting New Custom Field -> New Custom-Sorted Field... 
from the context menu:</p><img alt="custom-sorted_menu_command (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/custom-sorted_menu_command.jpg" height="255" width="370" /><p>In the New Custom-Sorted Field dialog, we can now verify that the "Custom" radio button is selected, and proceed to move each month from the Suggested Values list into the Sorted Values using the arrow button (highlighted in red below):</p><img alt="new_custom-sorted_field_dialog (49K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/new_custom-sorted_field_dialog.jpg" height="552" width="642" /><p>If you ever make a mistake, no need to worry! You can just select the item and use the up and down arrows to change its position in the list.</p><p>Once you're satisfied with the sort order, click the OK button to close the dialog.</p><p>You should now see the new Custom-Sorted field in the query results:</p><img alt="data_source_results_with_custom_sorted_field (66K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/data_source_results_with_custom_sorted_field.jpg" height="486" width="629" /><p>Note that this will not affect the sort order in the data source, but it will once we add our new field to the chart and apply a sort to it.</p><p>If we now set the Custom-Sorted field as the chart Axis and sort it in ascending order, the bars will now follow the sort order that we assigned in the New Custom-Sorted Field dialog:</p><img alt="rentals_by_month_chart_sorted_by_month (104K)" src="https://www.navicat.com/link/Blog/Image/2024/20240823/rentals_by_month_chart_sorted_by_month.jpg" height="869" width="924" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to use Custom-Sorted Fields to sort chart data according to an explicit sort order. 
Next week, we'll be moving on to the final custom field type of the series: Calculated Fields.</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. You'll also find Navicat BI bundled with Navicat Premium and Enterprise Editions of Navicat for MySQL, Oracle, PostgreSQL, SQLite, SQL Server and MariaDB.</p></body></html>]]></description>
</item>
<item>
<title>Navicat Premium Lite: the Simple Database Management &amp; Development Tool</title>
<link>https://www.navicat.com/company/aboutus/blog/2693-navicat-premium-lite-the-simple-database-management-development-tool.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat Premium Lite: the Simple Database Management &amp; Development Tool</title></head><body><b>Aug 16, 2024</b> by Robert Gravelle<br/><br/><p>Navicat Premium has long been the choice of database professionals everywhere who needed to simultaneously connect to a variety of database platforms from a single application. Navicat Premium Lite now offers a streamlined database management experience for users who only require the core features needed for basic database operations. In today's blog, we'll go over all of the impressive features that you'll find in Navicat Premium Lite as well as where to download it for FREE.</p><h1 class="blog-sub-title">The Main Window</h1><p>In terms of viewing, updating, and deleting data, Navicat Premium Lite is virtually indistinguishable from its older brother, Navicat Premium. You can still create and modify records seamlessly in Grid View, Tree View and JSON View through the built-in editors. Only Form View is reserved for Enterprise Edition.</p><img alt="data_viewer (231K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/data_viewer.jpg" height="672" width="962" /><h1 class="blog-sub-title">Object Designer</h1><p>To help manage database objects such as tables, views, keys, and constraints, Navicat Premium Lite offers an Object Designer.  It implements a clear and responsive interface that organizes the various objects into structured tabs. 
Here is the field list for a PostgreSQL table:</p><img alt="object_designer (168K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/object_designer.jpg" height="672" width="962" /><p>Tabs are configured according to the underlying database, so that connecting to PostgreSQL will cause the Rules tab to be present:</p><img alt="rules_tab (160K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/rules_tab.jpg" height="689" width="970" /><h1 class="blog-sub-title">Query Editor</h1><p>Thanks to Navicat Premium Lite's Query Editor, SQL coding has never been easier. It features code completion, code snippets, and syntax highlighting, all within an interface that is both clean and intuitive. </p><img alt="query_editor (207K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/query_editor.jpg" height="679" width="1023" /><p>However, you'll need to move to the Enterprise Edition if you're looking to use the visual Query Builder or the Beautify SQL tool.</p><h1 class="blog-sub-title">Import and Export</h1><p>You can import and export your data in a variety of text-based formats including TEXT (.txt), CSV, XML, and JSON. Only binary formats like Excel (.xls and .xlsx), MS Access and DBase are reserved for the Enterprise Edition.</p><img alt="export_wizard (127K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/export_wizard.jpg" height="676" width="1022" /><h1 class="blog-sub-title">Collaboration</h1><p>You'll be happy to know that Navicat's collaboration tools such as Navicat Cloud and Navicat On-Prem Server are also available in Navicat Premium Lite. You will undoubtedly find Navicat Cloud to be particularly useful; it allows you to synchronize your connection settings, queries, snippets, and virtual group information to the cloud service so you can get real-time access to them, and share them with your coworkers, wherever they may be, all around the world. 
</p><h1 class="blog-sub-title">Connect Securely</h1><p>You can rest easy knowing that your connections are secured via SSH Tunneling and SSL. Moreover, advanced authentication methods, which include PAM, X.509, and GSSAPI, provide multiple layers of protection against unauthorized access.</p><img alt="ssh_tunnel (76K)" src="https://www.navicat.com/link/Blog/Image/2024/20240816/ssh_tunnel.jpg" height="732" width="902" /><h1 class="blog-sub-title">Conclusion</h1><p>Navicat Premium Lite is the perfect tool for users who desire all of the core functionality that's required for most database operations. If you take a look at the <a class="default-links" href="https://www.navicat.com/products/navicat-premium-feature-matrix" target="_blank">feature matrix</a>, you'll see that all but the most advanced functionality is present in Navicat Premium Lite.</p><p>By retaining the ability to simultaneously connect to a variety of database platforms, including MySQL, Redis, PostgreSQL, SQL Server, Oracle, MariaDB, SQLite, and MongoDB, all from a single application, <a class="default-links" href="https://www.navicat.com/download/navicat-premium-lite" target="_blank">Navicat Premium Lite</a> is sure to have an enormous impact on the free database management software landscape. Navicat Premium Lite is available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Fields In Navicat BI: Mapped Fields Overview</title>
<link>https://www.navicat.com/company/aboutus/blog/2683-creating-custom-fields-in-navicat-bi-mapped-fields-overview.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Fields In Navicat BI: Mapped Fields Overview</title></head><body><b>Aug 7, 2024</b> by Robert Gravelle<br/><br/><p>Welcome to the 3rd installment in this series on Creating Custom Fields In Navicat BI. In Part 1, we learned how to add Type-Changed Fields to your Navicat BI charts.  Part 2 went on to describe how to use Concatenated Fields. Today's blog will introduce Mapped Fields. We'll be modifying the data source that we used in the last two articles, which connects to the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a> and returns a list of rentals for each film category. In the next blog, we will use the updated data source to create a chart that compares new releases to other categories.</p><h1 class="blog-sub-title">Field Mapping Overview</h1><p>In many ways, field mapping is highly similar to the process of transformation in Information Technology (IT). Whereas the latter runs a value through an algorithm to arrive at a transformed value, field mapping is simply the changing of one or more column values to another. </p><p>Field mapping can sometimes be observed in the field list clause of SELECT queries. For example, the "Sum of Payments per Movie Category" query which was the data source throughout this series returns a list of film categories along with a sum of their sales (or, more specifically, rentals). We can employ a CASE statement to make certain category names more descriptive, such as changing "Games" to "Video Games":</p><img alt="field_mapping_query (105K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/field_mapping_query.jpg" height="798" width="549" /><h1 class="blog-sub-title">Creating the New Releases vs. Other Categories Data Source</h1><p>Before designing any chart, we need a data source to fetch the information we require. 
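For reference, the CASE-statement style of field mapping mentioned above can be sketched in plain SQL. This hedged example uses Python's sqlite3 with a made-up one-column category table rather than the actual dvdrental query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category (name TEXT)")
conn.executemany("INSERT INTO category VALUES (?)", [("Games",), ("Comedy",)])

# Map "Games" to the more descriptive "Video Games"; pass other names through.
mapped = [row[0] for row in conn.execute(
    "SELECT CASE name WHEN 'Games' THEN 'Video Games' ELSE name END "
    "FROM category ORDER BY name"
)]
print(mapped)  # ['Comedy', 'Video Games']
```

A Mapped Field achieves the same value-for-value substitution without editing the underlying query.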
Once you've got a few data sources, you might find it easier to repurpose an existing one rather than create a new data source from scratch. In fact, the Rentals by Category data source that we used last time will do nicely.</p><p>We can easily duplicate any item in the Navicat BI workspace by selecting and then right-clicking (or Control-click on macOS) it in the workspace, and selecting Duplicate &lt;Item Type&gt; from the context menu. Hence, the menu item that we would want is "Duplicate Data Source":</p><img alt="duplicate_menu_item (40K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/duplicate_menu_item.jpg" height="263" width="444" /><p>That will create a new data source named "Rentals by Category 1". To rename our new data source, click once on the item to select it and then a second time to activate edit mode. You can tell that the item is ready for editing when the label turns into a textbox with the item text highlighted in blue: </p><img alt="rename_data_source (56K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/rename_data_source.jpg" height="318" width="440" /><p>Let's call our new data source "New Releases vs. Other Categories". Press the Enter key to save the new name:</p><img alt="renamed_data_source (10K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/renamed_data_source.jpg" height="101" width="220" /><h1 class="blog-sub-title">Adding a Mapped Field</h1><p>To add a new Mapped Field to the data source, right-click the name field (or Control-click on macOS) and select New Mapped Field... from the context menu:</p><img alt="new_mapped_field_menu_item (41K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/new_mapped_field_menu_item.jpg" height="255" width="406" /><p>That opens the New Mapped Field dialog. There, let's begin by renaming the Target Field Name to "mapped_category_names".</p><p>Next, we'll map the "New" category name to something more descriptive. 
To do that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Since the "New" category will have a one-to-one mapping to the new value, select "One-to-One" from the Mapping Method drop-down.</li><li>Choose "New" as the Source Value.</li><li>Enter "New Release" for the Mapped Value.</li></ul><p>Now we'll repeat the process for Null values, i.e., films which have not been assigned a category.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Click the Add button and select "Add One-to-One Values..." from the context menu.</li><li>In the Add One-to-One Values dialog, select the checkbox next to the (NULL) value and enter "Uncategorized" for the Mapped Value.<p><img alt="add_one_to_one_values_dialog (79K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/add_one_to_one_values_dialog.jpg" height="579" width="664" /></p></li><li>Click the OK button to close the dialog and add the new row to the Mapped Fields table.</li></ul><p>Finally, toggle the New Value radio button next to the Other Values label and enter "Other Categories" so that all other values are assigned to this catch-all category. The dialog should look as follows at this point:</p><img alt="new_mapped_field_dialog (59K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/new_mapped_field_dialog.jpg" height="552" width="642" /><p>Click OK to close the dialog. You should now see the mapped_category_names field in the data grid:</p><img alt="data_grid_with_mapped_category_names_field (119K)" src="https://www.navicat.com/link/Blog/Image/2024/20240806/data_grid_with_mapped_category_names_field.jpg" height="565" width="770" /><p>If you wish, you may delete the other calculated fields (as shown in the above image), since they won't be needed for the chart that we will be building next week.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to use Mapped Fields in your Navicat BI data sources. 
It is one of five custom field types, which include: Type-Changed, Concatenated, Mapped, Custom-Sorted, and Calculated. Next week, we will use the "New Releases vs. Other Categories" data source to create a chart that compares new releases to other categories.</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Navicat for MySQL 17: Empowering Smarter Business Decisions</title>
<link>https://www.navicat.com/company/aboutus/blog/3156-navicat-for-mysql-17-empowering-smarter-business-decisions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat for MySQL 17: Empowering Smarter Business Decisions</title></head><body><b>Jul 26, 2024</b> by Robert Gravelle<br/><br/><p>On May 12th, Navicat added several major updates to existing products, including Navicat Premium, Navicat BI, and Navicat Data Modeler.  One of the most popular Navicat tools, Navicat for MySQL, also benefitted from the new updates, receiving many of the same exciting new features as Navicat Premium. Today's blog will be covering just a few of the improvements that you'll find in the new Navicat for MySQL 17.</p><h1 class="blog-sub-title">A Brand New Modeler!</h1><p>We learned about <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2436-introducing-navicat-data-modeler-4.html" target="_blank">Navicat Data Modeler 4</a> in a previous blog. It utilizes a single workspace that incorporates several databases, as well as models, diagrams, and other related objects. As we saw, this approach allows users to illustrate different model objects within a single diagram as well as facilitate efficient switching between models, cross-model management, and sharing of model workspaces. Navicat for MySQL 17 comes with the same functionality built-in.</p><img alt="model_workspace (94K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/model_workspace.jpg" height="485" width="798" /><p>Models support Functions and Procedures, which allows you to pre-define processes and operations during the modeling stage.</p><img alt="pr_audit_customer_table (146K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/pr_audit_customer_table.jpg" height="672" width="962" /><h3>Data Dictionary</h3><p>The Data Dictionary has become an integral document for both information systems and research projects. As the name suggests, the Data Dictionary contains names, definitions, and attributes about data elements stored within a database. 
We learned about the Data Dictionary as well as Navicat 17's new Data Dictionary tool in the <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2426-create-a-data-dictionary-in-navicat-17.html" target="_blank">Create a Data Dictionary in Navicat 17</a> blog article.  Accessible from either the Model or main Navicat window, the Data Dictionary tool guides you through every step of the process of creating a highly professional finished document.</p><img alt="data_dictionary (125K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/data_dictionary.jpg" height="847" width="1001" /><h1 class="blog-sub-title">Powerful Data Profiling</h1><p>In the May 7 blog article "<a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2425-data-profiling-in-navicat-17.html" target="_blank">Data Profiling in Navicat 17</a>", we learned about the brand new Data Profiling tool. It provides a visual and comprehensive view of your data at the click of a button! You can find the Data Profiling tool in all Enterprise editions of Navicat database development and management tools, including Navicat for MySQL 17.</p><img alt="data_profiler (200K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/data_profiler.jpg" height="855" width="778" /><p>As you can see in the above screen capture, the Data Profiling tool offers a range of visual charts to represent the profiling results. 
These charts are a great way to analyze data types, formats, distributions, and informative statistics within your datasets.</p><h1 class="blog-sub-title">Table Profiles</h1><p>The "<a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2427-exploring-table-profiles-in-navicat-17.html" target="_blank">Exploring Table Profiles in Navicat 17</a>" blog article explored the new Table Profile feature, which allows us to save different combinations of filters, sort orders, and column displays that are frequently used for the table.</p><img alt="table_profile (119K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/table_profile.jpg" height="531" width="683" /><p>Having the ability to save table profiles is a tremendous time saver because you can switch between multiple configurations quickly, without having to reconfigure the table each time you access it.</p><h1 class="blog-sub-title">Improvements to the Query Designer</h1><p>You'll be pleased to know that there have been a couple of notable improvements to the Navicat for MySQL Query Designer. These include:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Pinned Results: query results may be retained for later reference at the click of a button!</li><li>Visual Query Explain: the Query Explain feature has been enhanced to utilize graphics. <p><img alt="visual_query_explain (109K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/visual_query_explain.jpg" height="820" width="836" /></p></li></ul><h1 class="blog-sub-title">Manage Connections: Multiple Connection Properties In One Interface</h1><p>Navicat for MySQL 17 offers the most straightforward process for initiating connections yet. It includes an advanced filter and search feature to help you quickly locate specific server types. 
You can also organize your connections with stars, colors and groups, or hide those which are seldom needed.</p><img alt="connection_properties (203K)" src="https://www.navicat.com/link/Blog/Image/2024/20240726/connection_properties.jpg" height="672" width="962" /><h1 class="blog-sub-title">Other Notable Improvements</h1><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Navicat URI: The server object URI may now be shared among team members, thus promoting collaboration.</li><li>BI: As described in this <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2665-navicat-bi-tutorial-creating-a-workspace-and-data-source.html" target="_blank">Navicat BI tutorial</a>, all charts on a dashboard using the same data source can now be interconnected. Hence, selecting any data point on one of the charts instantly updates all of the other charts on the same dashboard page that share the same data source to reflect your selection.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog covered just a few of the features that you'll find in the new <a class="default-links" href="https://navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 17</a>.</p><p>You can read more about all the new features and improvements in the <a class="default-links" href="https://navicat.com/en/navicat-17-highlights" target="_blank">Highlights page</a>. </p><p>Navicat for MySQL 17 is available for the Windows, Linux, and macOS operating systems on the <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">product download page</a>.</p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Fields In Navicat BI: Concatenated Fields</title>
<link>https://www.navicat.com/company/aboutus/blog/2679-creating-custom-fields-in-navicat-bi-concatenated-fields.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Fields In Navicat BI: Concatenated Fields</title></head><body><b>Jul 19, 2024</b> by Robert Gravelle<br/><br/><p>Welcome to part 2 in the Creating Custom Fields In Navicat BI series. Part 1 laid the groundwork for adding custom fields to your Navicat BI charts, starting with Type-Changed Fields. Today's blog will continue with Concatenated Fields. As with the last article, we'll be using a data source that connects to the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.</p><h1 class="blog-sub-title">What Is Concatenation?</h1><p>Concatenation is the appending of one string to another. It's commonly employed in queries to combine first and last name fields together. Case in point, the actor table in the "dvdrental" sample database splits actors' names into first_name and last_name fields. We can include both in a single column of the result by utilizing the concat() function:</p><img alt="concatenation_in_query (55K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/concatenation_in_query.jpg" height="545" width="497" /><p>Notice the passing of the space delimiter as the second input parameter; without it the names would be combined into a single word.</p><h1 class="blog-sub-title">Adding a Concatenated Field To the Rentals by Category Data Source</h1><p>In the recent blog on <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2677-creating-custom-fields-in-navicat-bi-type-changed-fields.html" target="_blank">Type-Changed Fields</a> we created a Vertical Stacked Bar Chart that shows daily sales for each movie category:</p><img alt="avg_sales_by_date_chart (245K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/avg_sales_by_date_chart.jpg" height="984" width="1280" /><p>We'll now modify that chart so that categories include IDs, 
meaning that "Comedy" will now appear as "Comedy (5)".</p><p>To do that, we'll have to modify the "Rentals by Category" data source, which supplies the data that populates the chart.</p><p>Locate and double-click the "Rentals by Category" data source in the BI workspace (Hint: if you have a lot of items in your workspace, you can click on the "Data Source" toggle button to only show data sources):</p><img alt="rentals_by_category_data_source_in_workspace (35K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/rentals_by_category_data_source_in_workspace.jpg" height="211" width="529" /><p>To add a new Concatenated Field to the data source, select New Custom Field -> Concatenated Field... from the menu:</p><img alt="concatenated_field_menu_item (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/concatenated_field_menu_item.jpg" height="172" width="464" /><p>That opens the New Concatenated Field dialog. We can see that Navicat already included the category_id in the Body textarea. Place it within parentheses "()" and add the name field in front of it so that the contents of the Body field are:</p><pre>["Sales per Category".name] (["Sales per Category".category_id])</pre><img alt="new_concatenated_field_dialog (54K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/new_concatenated_field_dialog.jpg" height="552" width="642" /><p>In the Target Field Name, enter "category_id_and_name" and click OK to create the new field. Our new field will appear in the data grid with a blue header:</p><img alt="updated_sales_per_category_data_source (132K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/updated_sales_per_category_data_source.jpg" height="679" width="788" /><h1 class="blog-sub-title">Updating Categories In the Average Sales by Date Chart</h1><p>Now all that's left to do is replace the "name" field in the "Average Sales by Date" chart with our new Concatenated Field. 
To do that, you'll first need to open the chart by locating and double-clicking it in the BI workspace (Hint: if you have a lot of items in your workspace, you can click on the "Chart" toggle button to only show charts):</p><img alt="avg_sales_by_date_chart_in_workspace (47K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/avg_sales_by_date_chart_in_workspace.jpg" height="242" width="622" /><p>Next, we can simply drag-and-drop the "category_id_and_name" field from the data source field list to the chart Group:</p><img alt="dragging_and_dropping_category_id_and_name_field_to_group (43K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/dragging_and_dropping_category_id_and_name_field_to_group.jpg" height="239" width="478" /><p>That will instantly cause the chart to refresh. Notice that the legend values now include IDs:</p><img alt="avg_sales_by_date_chart_with_concatenated_category_field (114K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/avg_sales_by_date_chart_with_concatenated_category_field.jpg" height="617" width="744" /><p>Hovering the cursor over a vertical bar in the chart shows all of the data for that day, with the category under the cursor in bold:</p><img alt="hovering_over_a_chart_category (83K)" src="https://www.navicat.com/link/Blog/Image/2024/20240719/hovering_over_a_chart_category.jpg" height="626" width="366" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to use Concatenated Fields in your Navicat BI data sources and charts. It is one of five custom field types, which include: Type-Changed, Concatenated, Mapped, Custom-Sorted, and Calculated. Next week, we'll learn more about mapped fields.</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Fields In Navicat BI: Type-Changed Fields</title>
<link>https://www.navicat.com/company/aboutus/blog/2677-creating-custom-fields-in-navicat-bi-type-changed-fields.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Fields In Navicat BI: Type-Changed Fields</title></head><body><b>Jul 11, 2024</b> by Robert Gravelle<br/><br/><p>Back in the sneak peek at Navicat 17, we were introduced to a couple of new Business Intelligence (BI) features, namely Chart Interaction and Calculated Fields. It bears stating that Calculated Fields are not the only type of custom field available in Navicat BI. In fact, there are five: Type-Changed, Concatenated, Mapped, Custom-Sorted, and, of course, Calculated. This blog will lay the groundwork for adding custom fields to your charts, starting with Type-Changed Fields. Over the next several weeks, each blog will cover a different field type. As in previous blog installments, we'll use a data source that connects to the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.</p><h1 class="blog-sub-title">Changing a Field's Type</h1><p>Here's the SELECT statement that fetches sales for each movie category:</p><pre>SELECT
  c.category_id,
  c.name,
  p.amount,
  r.rental_date
FROM
  payment AS p
  LEFT JOIN rental AS r ON p.rental_id = r.rental_id
  LEFT JOIN inventory AS i ON r.inventory_id = i.inventory_id
  LEFT JOIN film_category AS fc ON i.film_id = fc.film_id
  LEFT JOIN category AS c ON fc.category_id = c.category_id
ORDER BY c.category_id;</pre><p>It is similar to the query that we saw in the previous tutorial on charts, but with two important differences:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"> <li>the field list includes the rental_date</li> <li>the query doesn't aggregate sales by category</li></ul><p>We can see that the rental_date field contains a DateTime:</p><img alt="sales_per_category_data_source (117K)"
src="https://www.navicat.com/link/Blog/Image/2024/20240711/sales_per_category_data_source.jpg" height="648" width="808" /><p>Now suppose that we'd like to remove the time portion of the dates. We could edit the underlying query, or, we could simply add a new Type-Changed field to the existing data source.  To do that, we'll click on the rental_date header to select it and then click on the New Custom Field button and choose "Type-Changed Field..." from the context menu:</p><img alt="type_changed_field_item (85K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/type_changed_field_item.jpg" height="540" width="622" /><p>Having selected the rental_date column prior to clicking the New Custom Field button, Navicat knows to make a copy of that field. Let's call our new field "rental_date_no_time" and make it a Date type:</p><img alt="new_type_changed_field_dialog (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/new_type_changed_field_dialog.jpg" height="552" width="642" /><p>That will allow us to break down sales by date in charts.</p><p>After clicking the OK button, we can see the new field in the field list and data table:</p><img alt="rental_date_no_time_field (125K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/rental_date_no_time_field.jpg" height="561" width="770" /><p><table border="1" cellspacing="0" cellpadding="7"><tr><td>Quick hint: If you ever need to convert a DateTime field into a timestamp, you can choose Number from the Target Type Field drop-down in the New Type-Changed Field dialog:<p><img alt="timestamp (196K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/timestamp.jpg" height="537" width="772" /></p></td></tr></table> </p><p>We can now use our new field in a chart. 
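For reference, both conversions can also be expressed directly in SQL. The sketch below uses Python's built-in sqlite3 module, so SQLite's date() and strftime('%s', ...) functions stand in for what PostgreSQL would do with a ::date cast and EXTRACT(EPOCH FROM ...); the sample rental_date value is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rental (rental_date TEXT)")
conn.execute("INSERT INTO rental VALUES ('2005-05-24 22:53:30')")  # invented sample

# date() drops the time portion (like the Type-Changed Date field);
# strftime('%s', ...) yields a Unix timestamp (like the Number target type).
no_time, ts = conn.execute(
    "SELECT date(rental_date), strftime('%s', rental_date) FROM rental"
).fetchone()
print(no_time, ts)  # 2005-05-24 1116975210
```

The Type-Changed field achieves the same result without editing the underlying query, which keeps the original DateTime column available as well.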
Here's a Vertical Stacked Bar Chart that shows daily sales for each movie category:</p><img alt="avg_sales_by_date_chart (245K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/avg_sales_by_date_chart.jpg" height="984" width="1280" /><h1 class="blog-sub-title">Customizing Dates In a Chart</h1><p>It should be noted that we can further customize the format of Date and Time fields in the chart itself. For instance, we could change the dates to a "DD MMM YYYY" format by selecting it from the Date Formats section of the Data properties:</p><img alt="date_properties (37K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/date_properties.jpg" height="801" width="268" /><p>The new format will be immediately reflected in the chart:</p><img alt="chart_with_custom_date_format (123K)" src="https://www.navicat.com/link/Blog/Image/2024/20240711/chart_with_custom_date_format.jpg" height="620" width="768" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to use Type-Changed Fields in your Navicat BI data sources. It is one of five custom field types, which include: Type-Changed, Concatenated, Mapped, Custom-Sorted, and Calculated. Over the next several weeks, we'll go over each of the remaining four custom field types.</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Navicat BI Tutorial: Chart Design and Dashboards</title>
<link>https://www.navicat.com/company/aboutus/blog/2672-navicat-bi-tutorial-chart-design-and-dashboards.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat BI Tutorial: Chart Design and Dashboards</title></head><body><b>Jul 3, 2024</b> by Robert Gravelle<br/><br/><p>Business Intelligence (BI) is the practice of transforming data into actionable insights that empower organizations to streamline productivity and achieve better overall performance. This blog recently introduced the new Navicat BI, which is a tool that helps BI professionals better understand their data through the creation of data visualizations such as charts, graphs, and dashboards. Last week's blog walked us through the process of creating a workspace and data source in Navicat BI. Today's post will cover how to design an interactive chart and present it within a dashboard. </p><h1 class="blog-sub-title">Building the Total Sales Percentages by Category Chart</h1><p>Recall that, in last week's tutorial, we added the Dvdrental data source to our workspace. We will now use it to populate the Total Sales Percentages by Category chart.</p><p>To open the Chart Designer, click on the New Chart button in the main toolbar of the BI Workspace window:</p><img alt="new_chart_button (30K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/new_chart_button.jpg" height="204" width="559" /><p>That will bring up a dialog prompt where you can enter the Chart Name as well as assign the Data Source. Having previously created the Dvdrental data source, we can now select it to populate our new chart:</p><img alt="new_chart_dialog (19K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/new_chart_dialog.jpg" height="203" width="362" /><p>In the Chart Designer, the various chart types are located in a toolbar above the chart fields. We'll select the Pie Chart type by clicking its icon:</p><img alt="pie_chart_button (72K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/pie_chart_button.jpg" height="312" width="998" /><p>It has only two fields: the Group and Value. 
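Behind the scenes, the math a pie chart performs is simple: each Group's slice is its Value divided by the total of all Values. A quick sketch of that calculation with made-up numbers (not the actual dvdrental figures):

```python
# Made-up per-category totals (Group -> Value); not real dvdrental figures.
sales = {"Comedy": 4383.58, "Drama": 4587.39, "Sports": 5314.21}

# Each slice's percentage is its value's share of the grand total.
total = sum(sales.values())
slices = {name: round(100 * amount / total, 1) for name, amount in sales.items()}
print(slices)  # {'Comedy': 30.7, 'Drama': 32.1, 'Sports': 37.2}
```

Navicat BI computes these shares for you once the Group and Value fields are assigned; the sketch is only meant to show where the percentages come from.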
We can drag the fields from the data source on the left to the drop-downs. We'll group by name and show the value of the sum field:</p><img alt="pie_chart_with_fields_populated (140K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/pie_chart_with_fields_populated.jpg" height="912" width="999" /><h3>Customizing the Chart</h3><p>There are many ways to customize a chart via the Properties Pane to the right of the chart. For instance, we can set the title as follows:</p><img alt="title_properties (45K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/title_properties.jpg" height="828" width="382" /><p>There are many other properties in the Data pane. There, we can show or hide various data/chart elements as well as change the color palette. These may be selected from a predefined color scheme, or assigned to each specific value. Here's the chart without data values and a more colorful palette:</p><img alt="customized_chart (148K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/customized_chart.jpg" height="857" width="1013" /><h1 class="blog-sub-title">Presenting a Chart Within a Dashboard</h1><p>Just as we did to create a new chart, we can click the New Dashboard button to create a dashboard:</p><img alt="new_dashboard_button (30K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/new_dashboard_button.jpg" height="204" width="559" /><p>Doing so will again present a dialog prompt, where we can assign a name to our dashboard. We'll call it "Total Sales Percentages by Category Dashboard":</p><img alt="dashboard_dialog (16K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/dashboard_dialog.jpg" height="152" width="362" /><p>A dashboard is a place to combine multiple views of data to glean richer insights. As such, a single dashboard may contain multiple (interconnected) charts, text, images, shapes, and other elements that help gain new insights on existing data. 
You can even give a dashboard a background image, as seen here in our film categories dashboard: </p><img alt="film_categories_dashboard (296K)" src="https://www.navicat.com/link/Blog/Image/2024/20240703/film_categories_dashboard.jpg" height="984" width="1280" /><p>Dashboards may be saved as external files to be shared with colleagues or presented directly from Navicat BI.</p><h1 class="blog-sub-title">Conclusion</h1><p>This tutorial went through the process of creating a dashboard in Navicat BI. In part 1, we added a new workspace and built the data source. In today's blog, we learned how to design an interactive chart and present it within a dashboard.</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Navicat BI Tutorial: Creating a Workspace and Data Source</title>
<link>https://www.navicat.com/company/aboutus/blog/2665-navicat-bi-tutorial-creating-a-workspace-and-data-source.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat BI Tutorial: Creating a Workspace and Data Source</title></head><body><b>Jun 21, 2024</b> by Robert Gravelle<br/><br/><p>Navicat BI is a tool that helps you organize and transform your data into meaningful information through reporting. This is achieved via interactive dashboards that summarize the insights gained, along with workspaces that may be easily shared with colleagues and business leaders to make informed decisions on both operational (day-to-day) and strategic (long-term) matters. Today's blog will walk you through the process of creating a workspace and data source in Navicat BI. Next week's post will cover how to design an interactive chart and present it within a dashboard. </p><h1 class="blog-sub-title">The Chart at a Glance</h1><p>The chart that we will be building will summarize how much was spent on each movie category of the free PostgreSQL <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.  The data will be presented as a pie chart where each slice will represent a category. Here's a sneak peek at what the chart will look like:</p><img alt="chart_preview (65K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/chart_preview.jpg" height="673" width="635" /><h1 class="blog-sub-title">Creating the Workspace</h1><p>Navicat BI is available as a stand-alone product and is also integrated into Navicat Premium and Enterprise Editions. For the purposes of this tutorial, we'll be working in Navicat Premium 17.</p><p>The first step is to create a new workspace. 
To do that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Locate and click the BI button in the main Button Bar at the top of the main Navicat window.</li><li>Click the New Workspace button:<p><img alt="new_workspace_button (57K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/new_workspace_button.jpg" height="204" width="1010" /></p></li></ul><p>That will launch the BI feature:</p><img alt="new_workspace (94K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/new_workspace.jpg" height="672" width="962" /><h1 class="blog-sub-title">Creating the Data Source</h1><p>The BI feature lets you specify and integrate data from a variety of data sources, including databases (or any ODBC data source), external files such as Excel, Access, CSV, and even data stored on your computer, network, or a URL.</p><p>The new workspace clearly shows the steps to create a data visualization:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Create data source</li><li>Design chart</li><li>Present your dashboard</li></ul><p>Since I already have the dvdrental database in Navicat, I'll build the query there and then import it into the BI workspace.</p><p>Here is the full SQL statement. It includes an aggregation on the amount column of the payment table that sums its values for each category:</p><pre>SELECT
  c.category_id,
  c.name,
  SUM(p.amount)
FROM
  payment AS p
  LEFT JOIN rental AS r ON p.rental_id = r.rental_id
  LEFT JOIN inventory AS i ON r.inventory_id = i.inventory_id
  LEFT JOIN film_category AS fc ON i.film_id = fc.film_id
  LEFT JOIN category AS c ON fc.category_id = c.category_id
GROUP BY c.category_id, c.name
ORDER BY c.category_id;</pre><p>Here is the above query in the Navicat Query Editor, along with the results.  Note that the query was saved with the name "Sum of Payments per Movie Category".  
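The join chain in this query walks from each payment back to its category: payment → rental → inventory → film_category → category. If you'd like to convince yourself that the aggregation behaves as expected, here's a toy end-to-end run using Python's built-in sqlite3 module; the table and column names follow dvdrental, but the schemas are drastically reduced and the rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Drastically reduced stand-ins for the dvdrental tables (invented rows).
conn.executescript("""
CREATE TABLE category (category_id INT, name TEXT);
CREATE TABLE film_category (film_id INT, category_id INT);
CREATE TABLE inventory (inventory_id INT, film_id INT);
CREATE TABLE rental (rental_id INT, inventory_id INT);
CREATE TABLE payment (rental_id INT, amount REAL);
INSERT INTO category VALUES (1, 'Action'), (2, 'Comedy');
INSERT INTO film_category VALUES (10, 1), (11, 2);
INSERT INTO inventory VALUES (100, 10), (101, 11);
INSERT INTO rental VALUES (1000, 100), (1001, 101), (1002, 101);
INSERT INTO payment VALUES (1000, 2.99), (1001, 4.99), (1002, 0.99);
""")

# The same join chain and aggregation as the blog's query.
rows = conn.execute("""
    SELECT c.category_id, c.name, SUM(p.amount)
    FROM payment AS p
    LEFT JOIN rental AS r ON p.rental_id = r.rental_id
    LEFT JOIN inventory AS i ON r.inventory_id = i.inventory_id
    LEFT JOIN film_category AS fc ON i.film_id = fc.film_id
    LEFT JOIN category AS c ON fc.category_id = c.category_id
    GROUP BY c.category_id, c.name
    ORDER BY c.category_id
""").fetchall()
print([(cid, name, round(total, 2)) for cid, name, total in rows])
```

The two Comedy payments collapse into a single row whose sum is 5.98, which is exactly what the GROUP BY is doing at full scale against dvdrental.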
We'll need to recall the name later in order to import the query:</p><p><img alt="sum_of_payments_per_movie_category_query (111K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/sum_of_payments_per_movie_category_query.jpg" height="827" width="558" /></p><p>Now we'll create the data source in BI workspace:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Click the New Data Source button at the top of the BI workspace window.</li><li>Name the data source "Dvdrental" and select PostgreSQL for the Database connection:<p><img alt="data_source_name_and_db (61K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/data_source_name_and_db.jpg" height="712" width="802" /></p></li><li>Click Next to continue.</li><li>Select the PostgreSQL connection that contains the dvdrental database (I only have the one) and click OK to create the data source:<p><img alt="postgresql_connection (52K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/postgresql_connection.jpg" height="712" width="802" /></p></li></ul><p>We can now see the PostgreSQL connection in the Connections pane. If we expand it to see the dvdrental database, we can see the New Data Source Query item above the tables. Clicking it will open a new query editor. 
We could write the query there, but since we already did, we can click the Import Query button instead:</p><img alt="import_query_button (81K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/import_query_button.jpg" height="811" width="545" /><p>That will launch the Import Query dialog, where we can select the query that we built earlier:</p><img alt="import_query_dialog (70K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/import_query_dialog.jpg" height="512" width="762" /><p>Click the Import button to add it to our workspace.</p><img alt="dvdrental_data_source (31K)" src="https://www.navicat.com/link/Blog/Image/2024/20240621/dvdrental_data_source.jpg" height="204" width="559" /><h1 class="blog-sub-title">Going Forward</h1><p>With the data source in place, we're ready to design the chart. We'll do that in next week's blog. In the meantime, feel free to familiarize yourself with Navicat BI's many chart types, which include Bar Charts, Line/Area Charts, Bar/line Charts, Pie charts, Heatmap/Treemap, Pivot Table, Waterfall Chart, Scatter Chart, Value, Control, KPI/Gauge, and more!</p><p>You can download Navicat BI for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Unlock the Power of Data with Navicat BI</title>
<link>https://www.navicat.com/company/aboutus/blog/2439-unlock-the-power-of-data-with-navicat-bi.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Unlock the Power of Data with Navicat BI</title></head><body><b>Jun 5, 2024</b> by Robert Gravelle<br/><br/><p>Business Intelligence (BI) is the practice of transforming data into actionable insights that empower business leaders to enhance overall performance. One of the most important phases of this process is data exploration and visualization.  It entails the organization and transformation of data into meaningful information through reporting. To make data more understandable, BI professionals create data visualizations such as charts, graphs, and dashboards. These visual representations help decision-makers quickly grasp complex information. That's where Navicat BI comes in. Formerly known as Navicat Chart Creator, Navicat BI produces interactive dashboards that summarize the insights gained. Workspaces may be easily shared with colleagues and business leaders to make informed decisions on both operational (day-to-day) and strategic (long-term) matters. Today's blog article will touch upon a few of Navicat BI's many features.</p><img alt="dashboard (194K)" src="https://www.navicat.com/link/Blog/Image/2024/20240605/dashboard.jpg" height="734" width="1249" /><h1 class="blog-sub-title">Choose from a Variety of Data Sources</h1><p>Navicat BI lets you specify and integrate data from a variety of data sources with ease. Not only does it come with pre-built data connectors for databases such as MySQL, PostgreSQL, SQL Server, Oracle, SQLite, MariaDB, MongoDB and Snowflake, but it can also import data from any ODBC data source, including Sybase and DB2.</p><p>Data sources are not limited to databases either; data may reside within external files such as Excel, Access, CSV, as well as from data stored on your computer, network, or a URL. 
</p><p>Regardless of data source type(s), charts are updated in real time so that they always reflect any changes in the underlying data.</p><img alt="data_source (153K)" src="https://www.navicat.com/link/Blog/Image/2024/20240605/data_source.jpg" height="734" width="1249" /><h1 class="blog-sub-title">Custom Charts</h1><p>To inspire your organization to make better decisions, Navicat BI provides a wealth of different chart types to best illustrate your data in a meaningful way. These include Bar Charts, Line/Area Charts, Bar/line Charts, Pie charts, Heatmap/Treemap, Pivot Table, Waterfall Chart, Scatter Chart, Value, Control, KPI/Gauge, and more. Selecting the best chart type for your data ensures that your questions are easily answered and your presentations convey the message you wish to communicate.</p><img alt="chart (169K)" src="https://www.navicat.com/link/Blog/Image/2024/20240605/chart.jpg" height="734" width="1249" /><h1 class="blog-sub-title">Effective Dashboards</h1><p>Dashboards provide an interactive display of your charts. A single dashboard may combine multiple views of data to glean richer insights. All of the charts on a dashboard that share the same data source may be interconnected so that selecting a data point on one of the charts instantly updates all the other charts to reflect your selection.</p><img alt="linked_charts (95K)" src="https://www.navicat.com/link/Blog/Image/2024/20240605/linked_charts.jpg" height="687" width="976" /><h1 class="blog-sub-title">Collaboration Made Easy!</h1><p>Navicat BI is fully integrated with Navicat Collaboration, allowing you to synchronize your BI workspaces to Navicat's cloud solutions.  You can then invite your teammates to the project, where they can create and edit together.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog article covered just a few of Navicat BI's many features. In future articles, we'll delve into how to use Navicat BI in greater detail. 
</p><p>Navicat BI is available as a stand-alone product and is also integrated into Navicat Premium and Enterprise Editions. You can download it for a <a class="default-links" href="https://navicat.com/download/navicat-bi" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems. </p></body></html>]]></description>
</item>
<item>
<title>Introducing Navicat Data Modeler 4</title>
<link>https://www.navicat.com/company/aboutus/blog/2436-introducing-navicat-data-modeler-4.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Introducing Navicat Data Modeler 4</title></head><body><b>May 27, 2024</b> by Robert Gravelle<br/><br/><p>Having covered Navicat version 17 over the course of the last few weeks, it's time that we turned our attention to two other noteworthy releases, namely Navicat Data Modeler and Navicat BI (previously called Navicat Chart Creator). Today's blog will introduce <a class="default-links" href="https://www.navicat.com/en/navicat-data-modeler-4-highlights" target="_blank">Navicat Data Modeler 4</a>, while Navicat BI will be featured next week.</p><h1 class="blog-sub-title">Navicat Data Modeler 4: First Glance</h1><p>Navicat Data Modeler is a stand-alone product that features many powerful capabilities without sacrificing user-friendliness. It's perfectly suited to data modeling tasks of any complexity, making it an ideal choice for users of all levels, from novice to professional. Even those who are new to modeling will find Navicat Modeler 4's intuitive interface and seamless experience to be of great value.</p><img alt="navicat_modeler_4 (212K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/navicat_modeler_4.jpg" height="772" width="990" /><h1 class="blog-sub-title">Craft All of Your Models In a Unified Space</h1><p>In Navicat Modeler 4, a single workspace may incorporate several databases of different types, along with models, diagrams, and other related objects. This approach allows users to visualize different objects within the same diagram and facilitates efficient switching between models, cross-model management, and sharing of model workspaces. The end result is increased collaboration, which leads to enhanced overall productivity. 
Finally, the use of workspaces simplifies the navigation of complex systems and promotes a better understanding of system components.</p><img alt="workspaces (105K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/workspaces.jpg" height="517" width="1028" /><h1 class="blog-sub-title">Design Your Diagrams with Ease</h1><p>Navicat Modeler 4's environment is both responsive and interactive, making diagram creation amazingly easy. You're not constrained to employing a dogmatic diagramming style or approach. Instead, Navicat Modeler 4 supports many different types of models, notations, and representations. By keeping everything simple and concise, you are better able to concentrate on designing your models. Some notable features include:</p><ul><li>Layers: related elements may be arranged together on separate layers. </li><li>Locking/grouping option: elements may also be grouped together, so that they stay in place or move together as a single unit during editing or repositioning.<p><img alt="group_elements (146K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/group_elements.jpg" height="564" width="851" /></p></li><li>Auto-layout upgrade: as the name suggests, the auto-layout command automatically repositions and aligns elements in a visually pleasing way. Auto-layout may be applied to the entire diagram, selected elements, or all elements under the same layer. </li><li>New Present Mode: displays the model in a full-screen view, removes distractions, and provides a focused view of the diagram.</li></ul><h1 class="blog-sub-title">Data Dictionary</h1><p>The new Data Dictionary provides documentation and descriptions for each data element within databases across various server platforms. A wizard guides you through every step of the process to create a highly professional finished document. Documents may be exported as PDFs for sharing with team members and other stakeholders. 
</p><img alt="data_dictionary (137K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/data_dictionary.jpg" height="867" width="1006" /><h1 class="blog-sub-title">Keep your Model and Database Synchronized</h1><p>Over time, models can become out-of-date as the database evolves. That's where Navicat Modeler 4's Synchronization tool can help. By comparing and updating your model based on changes made in the database, you can always be certain that the model accurately reflects the current database structure. Synchronizing regularly minimizes discrepancies between the model and the database, thereby maintaining model integrity.</p><img alt="synchronize database to model (71K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/synchronize%20database%20to%20model.jpg" height="701" width="655" /><h1 class="blog-sub-title">Compare Modeling Projects</h1><p>The Compare Model Workspace helps find and highlight all the differences between both internal and external workspaces in mere minutes. Maintaining consistency across different versions or branches of the model ultimately leads to superior models all through the development and deployment process.</p><img alt="compare_model_workspace (100K)" src="https://www.navicat.com/link/Blog/Image/2024/20240527/compare_model_workspace.jpg" height="667" width="902" /><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog gave us a preview of what we can expect from Navicat Data Modeler 4. You can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-data-modeler" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Managing Connections in Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/2434-managing-connections-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Managing Connections in Navicat 17</title></head><body><b>May 17, 2024</b> by Robert Gravelle<br/><br/><p>Navicat 17 gives you more ways to connect to your database instances than ever before. In addition to traditional connection strings, Navicat 17 also supports URI connections. These provide access to objects with ease, regardless of their location. There's also a new Manage Connections feature, which allows you to establish and manage connections through a single user-centric interface.  In today's blog, we'll learn about both these exciting new features and how to use them to organize our connections with stars, colors and groups.</p><h1 class="blog-sub-title">How URIs Provide Direct Access for Seamless Collaboration</h1><p>Having the ability to share the server object URI among team members is a surefire way to increase collaboration. Navicat now offers a convenient shortcut to access the server object seamlessly, regardless of team members' locations. Simply clicking on the URI immediately opens the object in Navicat, thus avoiding having to manually locate the object. Saving time in this way enables team members to concentrate on their tasks without burdening them with additional work.</p><img alt="navicat_uri (108K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/navicat_uri.jpg" height="549" width="877" /><h1 class="blog-sub-title">Establishing a Connection</h1><p>Thanks to Navicat's user-centric interface, it's never been easier to connect to both local and cloud database instances. It greatly simplifies the connection setup, catering to users with varying levels of technical knowledge. The advanced filtering and search capabilities enable rapid and precise location of specific server types. 
The ability to manage multiple connection profiles and create URI-based connections further enhances efficiency as well as the overall user experience.</p><img alt="manage_connection (160K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/manage_connection.jpg" height="732" width="902" /><h1 class="blog-sub-title">Advanced Connection Management Features</h1><p>The Manage Connections feature introduces a new method for handling multiple connection properties from a central location, enabling efficient batch operations. You can tailor your connection management to your specific needs by prioritizing key connections with stars, assigning colors based on their importance, or grouping them together. With Manage Connections, everything is neatly organized and easily accessible, saving time and effort when searching for specific connections.</p><p>To launch Manage Connections, hover the cursor over the "My Connections" header in the Object Explorer pane. You'll see a cog wheel appear on the right-hand side.</p><img alt="manage_connection_button (37K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/manage_connection_button.jpg" height="490" width="229" /><p>Click it to open the Manage Connections tab:</p><img alt="manage_connection_tab (161K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/manage_connection_tab.jpg" height="571" width="1096" /><h3>Grouping Connections</h3><p>You may wish to group connections together by project, server, or any number of shared attributes. 
To create a new group in Navicat, simply right-click (or Control-click on macOS) anywhere in the Manage Connections pane and select Manage Group -> New Group from the context menu:</p><img alt="create_new_group (31K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/create_new_group.jpg" height="277" width="549" /><p>A new folder will then appear with a textbox beside it, ready to accept the group name.</p><p>Once a group folder has been created, you can drag connections into it to group them.</p><h3>Adding a Color</h3><p>Similar to grouping, assigning colors to connections can help organize them. To assign a color to a connection, right-click (or Control-click on macOS) the connection name and select a Color from the context menu:  </p><img alt="add_color (43K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/add_color.jpg" height="321" width="459" /><h3>Assigning a Star</h3><p>Stars are another way to highlight particular connections. To add a star to a connection, right-click (or Control-click on macOS) the connection name and choose the "Add Star" command from the context menu:</p><img alt="add_star_command (33K)" src="https://www.navicat.com/link/Blog/Image/2024/20240517/add_star_command.jpg" height="251" width="334" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we explored the many ways that Navicat 17's single user-centric interface allows us to establish and manage connections more efficiently than ever before.</p><p>Now that Navicat 17 is out, you can download it for a <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">14-day fully functional FREE trial</a>.  It's available for Windows, macOS, and Linux operating systems.</p></body></html>]]></description>
</item>
<item>
<title>Exploring Table Profiles in Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/2427-exploring-table-profiles-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Exploring Table Profiles in Navicat 17</title></head><body><b>May 9, 2024</b> by Robert Gravelle<br/><br/><p>With less than a week to go before the release of Navicat 17, this is the perfect time to delve into the new Table Profile feature. It allows us to save different combinations of filters, sort orders, and column displays that are frequently used for a given table. So, without any further ado, let's get started!</p><h1 class="blog-sub-title">Creating a Table Profile</h1><p>The free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a> contains a number of tables, views, and functions pertaining to the running of a fictional movie rental store. Here's the rental table in Navicat 17:</p><img alt="dvdrental_db_rental_table (204K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/dvdrental_db_rental_table.jpg" height="672" width="837" /><p>By default, tables are shown in the Grid Viewer, with records sorted according to the table design, or by whatever order they were added to the table. Columns are presented in the same order in which they appear in the table design, from left to right.</p><p>However, we can change a table's appearance in several ways. For instance, we can:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>apply filtering and sorting</li><li>freeze certain columns in place</li><li>hide columns</li><li>change the column order</li><li>alter a column's width</li></ul><p>Let's try a couple of actions on the rental table.</p><p>To change a column's position, we can drag it to wherever we want it. 
Likewise, we can change the width of a column by dragging its edge.</p><p>Here is the rental_date column after being dragged to the first position and resized to be a little more compact:</p><img alt="rental date column moved and resized (95K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/rental%20date%20column%20moved%20and%20resized.jpg" height="582" width="341" /><p>We can also quickly sort a column by clicking the arrow in the top-right corner of the column header and selecting an option from the context menu:</p><img alt="sorting a column (29K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/sorting%20a%20column.jpg" height="165" width="322" /><p>Notice that the arrow now shows the sort order:</p><img alt="rental date column sorted in ascending order (67K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/rental%20date%20column%20sorted%20in%20ascending%20order.jpg" height="398" width="327" /><h3>Saving a Table Profile</h3><p>If we were now to close the rental table, the next time that we access it, all of the changes that we made would be lost. However, in Navicat 17, we can save them to a new Table Profile. To do that, we would click the Table Profile button in the Table Toolbar and then select the "Save Profile As" menu command ("Save Profile" would also work here, as we haven't yet saved a profile for the table):</p><img alt="Save Profile command (36K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/Save%20Profile%20command.jpg" height="217" width="330" /><p>A dialog will appear where we can choose a name for our profile. After providing a descriptive name, we can click OK to create the new Table Profile:</p><img alt="Save As dialog (19K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/Save%20As%20dialog.jpg" height="161" width="402" /><h3>Loading a Table Profile</h3><p>As mentioned previously, reopening the rental table will cause our changes to be lost. 
Of course, having created a Table Profile, we can restore the table to its previous state by accessing it via Table Profile -> Load Profile -> [profile name]:</p><img alt="Loading a table profile (43K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/Loading%20a%20table%20profile.jpg" height="173" width="526" /><h1 class="blog-sub-title">Going Back to the Default Profile</h1><p>If you ever want to flip back to the default profile, you can do that by selecting Load Profile -> Default Profiles -> Quick Mode:</p><img alt="default profile (56K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/default%20profile.jpg" height="172" width="669" /><h1 class="blog-sub-title">Managing Profiles</h1><p>Selecting Table Profile -> Manage Profile from the Toolbar brings up the Profile screen. There we can see which columns have been hidden, as well as what filters and sorting have been applied. It even shows a preview of the SQL utilized to fetch the underlying data:</p><img alt="Profile screen (42K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/Profile%20screen.jpg" height="592" width="722" /><p>We can delete or load any profile by selecting it and clicking on the Delete or Load button.</p><p>One final tip: we can open the directory on our device that contains the Profile definition files by right-clicking a profile (or Control-click on Mac) and choosing the Open Containing Folder... 
command from the context menu:</p><img alt="Open Containing Folder (18K)" src="https://www.navicat.com/link/Blog/Image/2024/20240509/Open%20Containing%20Folder.jpg" height="141" width="459" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to work with Navicat 17's new Table Profiles by creating one for the <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>'s rental table.</p><p>On May 13, be sure to visit the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium  (English Edition) product page</a> to learn more about version 17!</p></body></html>]]></description>
</item>
<item>
<title>Create a Data Dictionary in Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/2426-create-a-data-dictionary-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Create a Data Dictionary in Navicat 17</title></head><body><b>May 8, 2024</b> by Robert Gravelle<br/><br/><p>Navicat 17, which is due for release on May 13 (for English Edition), adds many new and exciting features. One of these is the Data Dictionary tool. It employs a series of GUI screens to guide you through the process of creating a professional quality document that provides descriptions for each data element within databases across various server platforms. In today's blog, we'll learn more about Data Dictionaries, as well as go through the steps to create one in Navicat 17.</p><h1 class="blog-sub-title">What Is a Data Dictionary? </h1><p>A Data Dictionary contains names, definitions, and attributes about data elements stored within a database.  It can also be used for data that is defined as part of an information system or research project.  It describes the meaning and purpose of each data element, and offers guidance on their interpretation, accepted meanings and representation.  Additionally, a Data Dictionary provides metadata concerning these data elements, which helps in delineating their scope, characteristics, as well as the rules which govern their usage and application. </p><p>Data Dictionaries are beneficial for several reasons. 
They:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Ensure consistency and avoid data discrepancies throughout a project.</li>    <li>Establish standardized conventions for project-wide use.</li>    <li>Foster consistency in the collection and use of data among team members.</li>    <li>Facilitate streamlined data analysis.</li>    <li>Promote adherence to data standards.</li></ul><h1 class="blog-sub-title">Creating a Data Dictionary for the PostgreSQL dvdrental Database</h1><p>The PostgreSQL <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a> is a free download that you can use for learning and practicing PostgreSQL. As you might have guessed from its name, the DVD rental database represents the business processes of a DVD rental store.</p><p>We'll use Navicat 17's Data Dictionary tool to create documentation for it.</p><h1 class="blog-sub-title">Selecting the Database(s)</h1><p>Launching the Data Dictionary tool is easy; just select Tools -> Data Dictionary... from the main menu:</p><img alt="data_dictionary_command (52K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/data_dictionary_command.jpg" height="402" width="398" /><p>That will launch the first in a series of dialogs that will walk us through the process of creating the Data Dictionary. A process such as this one that guides users step-by-step through a series of tasks is known as a wizard.</p><p>This first dialog lets us select the database(s) that we'd like to document. Note that, if we select the database in the Navigation Pane before launching the Data Dictionary wizard, it will be pre-selected in the dialog. 
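</p><p>Incidentally, the metadata that the wizard collects is the same kind of information you can query by hand. In PostgreSQL, for example, a query against information_schema returns the column-level details that a Data Dictionary documents (a sketch for comparison only; Navicat's own internal queries may differ):</p><pre><code>SELECT table_name, column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;</code></pre><p>Running this against the dvdrental database lists every column in the public schema; the wizard gathers the equivalent details for us through its dialogs.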
</p><img alt="choose_database_dialog (76K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/choose_database_dialog.jpg" height="681" width="759" /><p>Click the Next button to continue.</p><h1 class="blog-sub-title">Choosing Objects</h1><p>On the next screen, we can choose the database objects that we'd like to include in our Data Dictionary, as well as reorder the server, databases, and schemas. </p><img alt="choose_objects_dialog (109K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/choose_objects_dialog.jpg" height="816" width="664" /><p>By default, the tool will generate some high-level information about the database as well as definitions for all tables, views, and functions. However, we can choose to omit any of these if we wish by deselecting the associated checkbox.</p><p>There is a Search bar at the bottom of the screen to help locate objects:</p><img alt="search_feature (72K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/search_feature.jpg" height="816" width="664" /><h1 class="blog-sub-title">Templates</h1><p>Navicat provides a number of fully customizable templates:</p><img alt="templates (66K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/templates.jpg" height="816" width="664" /><p>Once we've selected a template, we can go ahead and customize every facet of its appearance. </p><img alt="template_customization (108K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/template_customization.jpg" height="816" width="1040" /><p>Different parts of the document are accessible via the headers at the top of the screen. 
These include the:</p> <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Cover</li><li>Table of Contents</li><li>Header/Footer</li><li>Main Content</li><li>Paper</li></ul><img alt="header_footer_customization (134K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/header_footer_customization.jpg" height="816" width="1040" /><h1 class="blog-sub-title">Generating the Document</h1><p>The last step is to save the Data Dictionary to a file. The document will be saved as a Portable Document Format (PDF) file.</p><img alt="set_file_path (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/set_file_path.jpg" height="480" width="603" /><p>Every step of document creation will be logged in real-time so that we can see if Navicat encountered any issues along the way.</p><img alt="processing_results (104K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/processing_results.jpg" height="480" width="603" /><p>We can click the Open button at the bottom of the screen to view the final product in the associated program:</p><img alt="data_dictionary_in_adobe (105K)" src="https://www.navicat.com/link/Blog/Image/2024/20240508/data_dictionary_in_adobe.jpg" height="689" width="693" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned more about Data Dictionaries, as well as how easy it is to create one in Navicat 17. You can learn more about version 17 by visiting the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium (English Edition) product page</a> from May 13 onward!</p></body></html>]]></description>
</item>
<item>
<title>Data Profiling in Navicat 17</title>
<link>https://www.navicat.com/company/aboutus/blog/2425-data-profiling-in-navicat-17.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Data Profiling in Navicat 17</title></head><body><b>May 7, 2024</b> by Robert Gravelle<br/><br/><p>Last week's blog heralded the upcoming launch of Navicat 17, which is currently in Beta and scheduled to arrive on May 13 (English Edition)! As we saw, version 17 introduces a lot of exciting new features. One of the biggest is the Data Profiling tool. It provides a visual and comprehensive view of your data at the click of a button! In today's blog, we'll use it to obtain some quick statistics on the rental table of the free PostgreSQL <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.</p><h1 class="blog-sub-title">Launching the Data Profiling Tool</h1><p>As mentioned in the introduction, the Data Profiling tool requires little more than the click of a button to use. You'll find it in the toolbar of any table, view, or query result (highlighted in red below):</p><img alt="data_profiling_button (77K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/data_profiling_button.jpg" height="277" width="767" /><p>From there, you can choose to profile all records (the default) or add a filter to only profile rows that match a given set of criteria:</p><img alt="profiling_options (57K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/profiling_options.jpg" height="219" width="695" /><h1 class="blog-sub-title">Filtering Records</h1><p>For datasets with many records, it is often useful to focus on a subset of the data. That's where the "Add Filter" option comes in. It allows us to add filters (and sorting) using the familiar "Filter &amp; Sort" feature. Let's say that we only want to profile records of the rental table whose rental date is in the first half of 2006. 
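</p><p>In SQL terms, that filter boils down to a simple range predicate. Here is a sketch of the equivalent query (not necessarily the SQL that Navicat generates internally):</p><pre><code>SELECT *
FROM rental
WHERE rental_date BETWEEN '2006-01-01 00:00:00' AND '2006-06-30 23:59:59';</code></pre><p>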
All we need to do is add a filter on the rental_date column that selects rows with values between Jan 1st, 2006, at 12:00:00 AM and June 30th, 2006, at 11:59:59 PM. Selecting the dates and times is a snap, thanks to the built-in Date and Time Picker!</p><img alt="filtering_by_rental_date (62K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/filtering_by_rental_date.jpg" height="409" width="385" /><p>One feature of the Data Profiler that you won't find in the "Filter &amp; Sort" tool is the ability to limit records to a certain number, like, say, a thousand:</p><img alt="limit_records_feature (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/limit_records_feature.jpg" height="243" width="431" /><h1 class="blog-sub-title">Viewing Profiling Results</h1><p>Clicking the "Start Profiling" button (or the "Apply Data Settings" button after editing the criteria) runs the profiler on the rows that match the selected filtering criteria.</p><p>Clicking on the column header shows the statistics for that field. These are shown in two places: under the column name and below the grid.</p><p>The kinds of stats you'll find include the percentage of Nulls vs. Non-nulls, as well as the number of distinct and unique values. There's even a value distribution chart! To view all of the values, you can either increase the column width or simply use the scrollbar at the bottom of the Value Distribution chart in the Column Statistics at the bottom of the screen:</p><img alt="customer_id_stats (182K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/customer_id_stats.jpg" height="851" width="767" /><h1 class="blog-sub-title">Changing the Layout</h1><p>There are a few options for changing how the data is presented. 
For instance, we can show distributions by count or value:</p><img alt="distribution_by_value (20K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/distribution_by_value.jpg" height="279" width="225" /><p>We can also choose between a Compact or Detailed layout (Detailed is the default). Here are the rental table headers with the Compact layout:</p><img alt="compact_layout (77K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/compact_layout.jpg" height="276" width="769" /><h1 class="blog-sub-title">Getting More Specific</h1><p>Every bar of the distribution chart represents a value found in the underlying table, view, or query. We can learn more about it by hovering the cursor over it. The popup box shows the value, along with how many times it appears within the dataset and what percentage that represents across all of the records:</p><img alt="hover_stats_on_column_header (16K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/hover_stats_on_column_header.jpg" height="258" width="184" /><p>Moreover, clicking a bar will Spotlight that value, homing in on the matching row in the grid and displaying statistics that are pertinent to that value:</p><img alt="spotlight_feature (136K)" src="https://www.navicat.com/link/Blog/Image/2024/20240507/spotlight_feature.jpg" height="852" width="767" /><p>Clicking the bar a second time will remove the Spotlight.</p><p>We can also see in the above image the full range of stats available in the Column Statistics section. 
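</p><p>Most of these figures can be cross-checked with plain SQL aggregates. Here is a hand-rolled sketch for the rental table (the column choices are purely illustrative):</p><pre><code>SELECT count(*) AS total_rows,
       count(*) - count(return_date) AS null_return_dates,
       count(DISTINCT customer_id) AS distinct_customers,
       min(rental_date) AS earliest_rental,
       max(rental_date) AS latest_rental
FROM rental;</code></pre><p>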
The Column Statistics section includes additional figures, such as the number of Repeated values, Minimum and Maximum values, and many more.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we familiarized ourselves with Navicat 17's new Data Profiling tool by using it to obtain some quick statistics on the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>'s rental table.</p><p>On May 13, be sure to visit the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium (English Edition) product page</a> to learn more about version 17!</p></body></html>]]></description>
</item>
<item>
<title>Exploring Different Types of Constraints in PostgreSQL</title>
<link>https://www.navicat.com/company/aboutus/blog/2423-exploring-different-types-of-constraints-in-postgresql.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Exploring Different Types of Constraints in PostgreSQL</title></head><body><b>May 3, 2024</b> by Robert Gravelle<br/><br/>    <p>One of PostgreSQL's key features is the ability to enforce various constraints on data, ensuring data integrity and reliability. Today's blog article will provide an overview of PostgreSQL's various constraint types and explore their usage with examples from the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.</p>    <h1 class="blog-sub-title">1. Check Constraints:</h1>    <p>Check constraints allow you to specify conditions that must be met for a column when inserting or updating data. This ensures that only valid data is stored in the database. For instance, if you have a "customers" table and want to ensure that the age of a customer is at least 18, you can add a check constraint like this:</p>    <pre><code>ALTER TABLE customers
ADD CONSTRAINT check_age CHECK (age >= 18);</code></pre>    <h1 class="blog-sub-title">2. Not-Null Constraints:</h1>    <p>Not-null constraints ensure that a column cannot contain null values. For example, in the "customers" table, if you want to ensure that every customer record includes an email address, you can enforce a not-null constraint on the email column like this:</p>    <pre><code>ALTER TABLE customers
ALTER COLUMN email SET NOT NULL;</code></pre>    <h1 class="blog-sub-title">3. Unique Constraints:</h1>    <p>Unique constraints ensure that the values in a column or a group of columns are unique across all the rows in a table. This is often used for fields like usernames or email addresses to avoid duplication. 
For instance, in the "customers" table, if you want to ensure that each customer has a unique email address, you can add a unique constraint like this:</p>    <pre><code>ALTER TABLE customers
ADD CONSTRAINT unique_email UNIQUE (email);</code></pre>    <h1 class="blog-sub-title">4. Primary Keys:</h1>    <p>A primary key is a combination of unique and not-null constraints. It uniquely identifies each record in a table and ensures data integrity. In the "customers" table, you might have a column named "customer_id" that serves as a primary key:</p>    <pre><code>ALTER TABLE customers
ADD CONSTRAINT pk_customer_id PRIMARY KEY (customer_id);</code></pre>    <h1 class="blog-sub-title">5. Foreign Keys:</h1>    <p>Foreign keys establish a relationship between two tables by enforcing referential integrity. They ensure that values in one table's column match values in another table's column. For example, in the "rental" table, if you want to ensure that every rental record references a valid customer, you can add a foreign key constraint like this:</p>    <pre><code>ALTER TABLE rental
ADD CONSTRAINT fk_customer_id
FOREIGN KEY (customer_id)
REFERENCES customers(customer_id);</code></pre>    <h1 class="blog-sub-title">6. Exclusion Constraints:</h1>    <p>Exclusion constraints ensure that no two rows in a table satisfy a specified predicate. This allows you to define custom constraints beyond simple unique or check constraints. 
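</p><p>One prerequisite worth noting: when an exclusion constraint combines plain equality on a scalar column with a GiST index, PostgreSQL needs the btree_gist extension to supply the equality operator class, so install it first:</p><pre><code>CREATE EXTENSION IF NOT EXISTS btree_gist;</code></pre><p>With that in place, range columns and the overlap operator (&&) can be mixed with scalar equality in a single constraint. 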
For example, you might have a "bookings" table where you want to ensure that no two bookings for the same room overlap in time:</p>    <pre><code>ALTER TABLE bookings
ADD CONSTRAINT exclude_overlapping_bookings
EXCLUDE USING GIST (room_id WITH =, booking_range WITH &&);</code></pre>    <h1 class="blog-sub-title">Constraints in Navicat</h1>    <p><a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 16</a> offers an easy-to-use graphical Table Designer for creating and managing PostgreSQL constraints:</p>    <img src="https://www.navicat.com/link/Blog/Image/2024/20240503/Screenshot_Navicat_16_PostgreSQL_Windows_02_ObjectDesign.png" alt="Screenshot_Navicat_16_PostgreSQL_Windows_02_ObjectDesign.png" />    <p>Primary Key constraints are created when you add a key icon to one or more fields by clicking in the Key column. Other constraints are found on their associated tab.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>PostgreSQL provides several different types of constraints to maintain data integrity and enforce business rules. Understanding these constraints and how to use them effectively is essential for designing robust and reliable database schemas.</p>    <p>Looking for an easy-to-use graphical tool for PostgreSQL database development? Navicat for PostgreSQL 16 has got you covered. Click <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to download the fully functioning application for a free 14-day trial!</p></body></html>]]></description>
</item>
<item>
<title>Navicat 17: A Sneak Peek</title>
<link>https://www.navicat.com/company/aboutus/blog/2421-navicat-17-a-sneak-peek.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 17: A Sneak Peek</title></head><body><b>Apr 30, 2024</b> by Robert Gravelle<br/><br/><p>It's official: Navicat 17 (English Edition) is currently in Beta and scheduled to launch on May 13! If you thought that Navicat 16 introduced a lot of exciting new features, you may want to sit down for this. There is so much ground to cover for this upgrade that we can barely get through it all in one article. Nonetheless, today's blog will at least provide an outline of what to expect come May 13 (for English Edition).</p><h1 class="blog-sub-title">All-New Model Workspace</h1><p>The Model Workspace has been completely redesigned to include enhanced diagram design, a more powerful synchronization tool, support for data dictionaries, and more.</p><img alt="model_workspace (259K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/model_workspace.jpg" height="854" width="962" /><h1 class="blog-sub-title">Data Profiling</h1><p>The Data Viewer now integrates a Data Profiling tool that provides a visual and comprehensive view of your data.</p><img alt="data_profile (203K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/data_profile.jpg" height="853" width="774" /><h1 class="blog-sub-title">Data Dictionary</h1><p>The new Data Dictionary provides documentation and descriptions for each data element within databases across various server platforms. 
A wizard guides you through every step of the process to create a highly professional finished document:</p><img alt="data_dictionary (97K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/data_dictionary.jpg" height="720" width="993" /><h1 class="blog-sub-title">Query Pinned Result</h1><p>Clicking the Pin button on any query result retains it for later reference.</p><img alt="query_pinned_result (205K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/query_pinned_result.jpg" height="851" width="583" /><p>Query results may be just as easily discarded using the Unpin button.</p><h1 class="blog-sub-title">Visual Query Explain</h1><p>Available in MySQL, MariaDB and PostgreSQL, Visual Query Explain can help you gain valuable insights on query implementation in ways that the traditional text Explain simply can't.</p><img alt="visual_explain (113K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/visual_explain.jpg" height="850" width="767" /><h1 class="blog-sub-title">Table Profile</h1><p>Now you can save different combinations of filters, sort order, and column displays that are frequently used for a given table.</p><img alt="table_profile (90K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/table_profile.jpg" height="392" width="766" /><p>You can also see in the above screenshot that you now have the option to show the data types in the column headers.</p><h1 class="blog-sub-title">Navicat URI</h1><p>This feature allows team members to share and locate server objects with ease.</p><img alt="navicat_uri (108K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/navicat_uri.jpg" height="549" width="877" /><h1 class="blog-sub-title">Manage Connection</h1><p>Navicat 17 helps you to organize your connections with stars, colors and groups, or even hide them.</p><img alt="manage_connection (160K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/manage_connection.jpg" height="732" width="902" /><h1 
class="blog-sub-title">Business Intelligence (BI) Feature</h1><p>The Business Intelligence (BI) feature includes a couple of additions:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>Chart Interaction: All charts on a dashboard can be interconnected.</li>  <li>Calculated Field: Data can now be transformed using specific formulas or expressions.</li> </ul>  <img alt="calculated_field (83K)" src="https://www.navicat.com/link/Blog/Image/2024/20240430/calculated_field.jpg" height="824" width="689" /><h1 class="blog-sub-title">Visual Aggregation Pipeline</h1><p>You can now construct and test your MongoDB aggregation pipelines step-by-step through a clear and responsive UI.</p><h1 class="blog-sub-title">Support for Redis Sentinel</h1><p>Navicat has long supported Redis, the popular open-source in-memory data structure store. Navicat 17 adds support for Redis Sentinel, the high-availability solution for Redis. It provides monitoring, automatic failover, and configuration management for Redis instances, ensuring continuous operation even in the event of failures.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered just some of the exciting new features to expect in Navicat 17 on May 13 (for English Edition). We'll be sure to cover each of them in more detail in future blog instalments!</p></body></html>]]></description>
</item>
<item>
<title>Understanding PostgreSQL Index Types</title>
<link>https://www.navicat.com/company/aboutus/blog/2419-understanding-postgresql-index-types.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Understanding PostgreSQL Index Types</title></head><body><b>Apr 26, 2024</b> by Robert Gravelle<br/><br/><p>PostgreSQL, the popular open-source relational database management system, offers various index types to optimize query performance and enhance data retrieval efficiency. In this article, we'll learn how to create different types of indexes in PostgreSQL. Wherever possible, indexes will be applied to the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a> using both DDL statements and <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 16</a>.</p><h1 class="blog-sub-title">1. B-Tree Index:</h1><p>The B-Tree index is the default index type in PostgreSQL, suitable for various data types, including text, numeric, and timestamp. It organizes data in a balanced tree structure, facilitating efficient range queries and equality searches. Let's create a B-Tree index on the "customer_id" column in the "payment" table:</p><pre><code>CREATE INDEX btree_customer_id_idx ON payment(customer_id);</code></pre><p>In Navicat you'll find indexes on the "Indexes" tab of the Table Designer. 
To create the above index, we would enter "btree_customer_id_idx" in the Name field, choose "customer_id" for the "Fields", and select "B-Tree" from the Index method drop-down: </p><img alt="PostgreSQL index types (10K)" src="https://www.navicat.com/link/Blog/Image/2024/20240426/PostgreSQL%20index%20types.jpg" height="156" width="255" /><p>Here is the btree_customer_id_idx index with all of the above fields populated:</p><img alt="btree_customer_id_idx_index (35K)" src="https://www.navicat.com/link/Blog/Image/2024/20240426/btree_customer_id_idx_index.jpg" height="130" width="647" /><p>Clicking the Save button will then create the index.</p><h1 class="blog-sub-title">2. Hash Index:</h1><p>Hash indexes are optimal for equality checks but less effective for range queries. They use hash functions to map keys to index entries. Here's how to create a Hash index on the "film_id" column of the "inventory" table, first using a DDL statement:</p><pre><code>CREATE INDEX hash_film_id_idx ON inventory USING HASH(film_id);</code></pre><p>And now with Navicat:</p><img alt="hash_film_id_idx_index (34K)" src="https://www.navicat.com/link/Blog/Image/2024/20240426/hash_film_id_idx_index.jpg" height="128" width="648" /><h1 class="blog-sub-title">3. GiST Index:</h1><p>Generalized Search Tree (GiST) indexes support various data types and complex queries, making them versatile for applications like full-text search and geometric data types.</p><p>Here's an example of creating a GiST index on a geometry column:</p><pre><code>CREATE INDEX index_geometry ON table_name USING GIST (geometry_column);</code></pre><h1 class="blog-sub-title">4. SP-GiST Index:</h1><p>Space-Partitioned Generalized Search Tree (SP-GiST) indexes are suitable for data types with multidimensional or hierarchical structures. 
They efficiently index non-balanced data structures such as quadtrees and radix trees.</p><p>Here's an example of creating an SP-GiST index on an inet column (SP-GiST supports types such as point, range, text, and inet):</p><pre><code>CREATE INDEX index_ip_addresses ON table_name USING SPGIST (inet_column);</code></pre><h1 class="blog-sub-title">5. GIN Index:</h1><p>Generalized Inverted Index (GIN) indexes are ideal for cases like full-text search, array types, and composite data types. They are efficient for data types with multiple keys or components. Let's create a GIN index on the "title" column in the "film" table for full-text search:</p><pre><code>CREATE INDEX gin_title_idx ON film USING gin(to_tsvector('english', title));</code></pre><p>Here is the Indexes tab of the "film" table in Navicat with the gin_title_idx index added:</p><img alt="gin_title_idx_index (47K)" src="https://www.navicat.com/link/Blog/Image/2024/20240426/gin_title_idx_index.jpg" height="148" width="745" /><h1 class="blog-sub-title">6. BRIN Index:</h1><p>Block Range Index (BRIN) is suitable for large tables with sorted data, as it indexes ranges of data blocks rather than individual rows. It is efficient for columns with correlation between adjacent values. Here's how to create a BRIN index on the "rental_date" column in the "rental" table:</p><pre><code>CREATE INDEX brin_rental_date_idx ON rental USING brin(rental_date);</code></pre><p>Here is the brin_rental_date_idx index in Navicat:</p><img alt="brin_rental_date_idx_index (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20240426/brin_rental_date_idx_index.jpg" height="128" width="772" /><h1 class="blog-sub-title">Conclusion</h1><p>PostgreSQL offers a range of index types catering to diverse data types and query requirements. Understanding the characteristics of each index type helps database administrators and developers make informed decisions when optimizing database performance. 
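</p><p>Before settling on an index type, it's worth verifying with <code>EXPLAIN</code> that the planner actually uses the new index. As a simple sketch against the dvdrental sample database (the exact plan depends on table statistics, so a bitmap scan may appear rather than a plain index scan):</p><pre><code>EXPLAIN SELECT * FROM payment WHERE customer_id = 341;
-- Look for an Index Scan or Bitmap Index Scan on btree_customer_id_idx in the output.</code></pre><p>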
Meanwhile, using a tool like <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 16</a> makes working with indexes much easier.</p></body></html>]]></description>
</item>
<item>
<title>Mastering PostgreSQL Rule Syntax</title>
<link>https://www.navicat.com/company/aboutus/blog/2417-mastering-postgresql-rule-syntax.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Mastering PostgreSQL Rule Syntax</title></head><body><b>Apr 19, 2024</b> by Robert Gravelle<br/><br/><p>PostgreSQL rules offer a powerful mechanism for controlling query execution and data manipulation within the database. Understanding the syntax and usage of rules is essential for harnessing their capabilities effectively. In last week's article, we explored how PostgreSQL rules work and how they differ from triggers. Today's follow-up will cover their syntax in detail with more practical examples using the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">"dvdrental" sample database</a>.</p><h1 class="blog-sub-title">Anatomy of PostgreSQL Rules</h1><p>PostgreSQL rules consist of several key components that define their behavior:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>CREATE RULE Statement</strong>: To create a rule, we use the <code>CREATE RULE</code> statement followed by a rule name and the rule definition.</li>    <li><strong>Rule Event</strong>: Rules are triggered by one of four events: <code>SELECT</code>, <code>INSERT</code>, <code>UPDATE</code>, or <code>DELETE</code>.</li>    <li><strong>Rule Action</strong>: The action specifies what should happen when the rule is triggered. It can be one or more SQL commands, such as <code>INSERT</code>, <code>UPDATE</code>, or <code>DELETE</code>, or <code>NOTHING</code> to suppress the original command.</li>    <li><strong>Rule Condition</strong>: Conditions are optional and allow rules to be triggered only when certain criteria are met. 
They are specified using a <code>WHERE</code> clause.</li></ul><h1 class="blog-sub-title">Practical Examples Using the "dvdrental" Sample Database</h1><h3>Example 1: Auditing Inserts</h3><p>Suppose we want to log all insertions into the "customer" table for auditing purposes. First, we'll need a table to store the audit data:</p><pre><code>CREATE TABLE customer_audit (
    action_type VARCHAR(10),
    customer_id INT,
    audit_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);</code></pre><p>We can also create the above table using <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 16</a>'s Table Designer. Here's what that looks like:</p><img alt="customer_audit_table_design (55K)" src="https://www.navicat.com/link/Blog/Image/2024/20240419/customer_audit_table_design.jpg" height="336" width="662" /><p>Now we'll create a rule that inserts a record into an audit table whenever a new customer is added:</p><pre><code>CREATE RULE log_customer_insert AS
    ON INSERT TO customer
    DO ALSO
        INSERT INTO customer_audit (action_type, customer_id)
        VALUES ('INSERT', NEW.customer_id);</code></pre><p>In Navicat, you'll find the rules for a given table on the "Rules" tab of the Table Designer.  Here is the log_customer_insert rule:</p><img alt="log_customer_insert_rule (46K)" src="https://www.navicat.com/link/Blog/Image/2024/20240419/log_customer_insert_rule.jpg" height="231" width="647" /><h3>Example 2: Restricting Updates</h3><p>Let's say we want to prevent updates to the rental return date once it has been set. 
We can create a rule that blocks any attempts to update the return date column after it has been initially set:</p><pre><code>CREATE RULE prevent_return_date_update AS
    ON UPDATE TO rental
    WHERE OLD.return_date IS NOT NULL AND NEW.return_date IS DISTINCT FROM OLD.return_date
    DO INSTEAD NOTHING;</code></pre><p>Here is the prevent_return_date_update rule in Navicat:</p><img alt="prevent_return_date_update_rule (53K)" src="https://www.navicat.com/link/Blog/Image/2024/20240419/prevent_return_date_update_rule.jpg" height="236" width="644" /><p>You may recognize the enforce_min_rental_duration rule from <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2414-understanding-postgresql-rules.html" target="_blank">last week's article</a>.</p><h3>Example 3: Data Transformation</h3><p>Suppose we want to transform phone numbers stored in the "address" table into a standard format with a country-code prefix. We can create a rule that automatically updates phone numbers whenever a new address is inserted:</p><pre><code>CREATE RULE transform_phone_number AS
    ON INSERT TO address
    DO ALSO
        UPDATE address
        SET phone = '+1-' || SUBSTRING(phone FROM 3)
        WHERE address_id = NEW.address_id;</code></pre><p>Need more space to enter the complete Where or Definition statement? Clicking the ellipsis [...] button beside the text box opens a large text area where you can view and compose the full statement. Here is the transform_phone_number rule in Navicat that shows the full Definition:</p><img alt="transform_phone_number_rule (63K)" src="https://www.navicat.com/link/Blog/Image/2024/20240419/transform_phone_number_rule.jpg" height="550" width="645" /><h1 class="blog-sub-title">Conclusion</h1><p>PostgreSQL rules offer a versatile toolset for implementing complex logic and enforcing data integrity within the database. 
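</p><p>Once created, rules can be reviewed through the <code>pg_rules</code> system view. As a quick sketch (assuming the rules above were created in the public schema), the following lists each rule alongside the definition PostgreSQL stored for it:</p><pre><code>SELECT tablename, rulename, definition
FROM pg_rules
WHERE schemaname = 'public';</code></pre><p>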
By exploring diverse examples like auditing inserts, restricting updates, and data transformation, developers can gain a deeper understanding of how rules can be applied to address various requirements effectively. With PostgreSQL's flexible rule system, developers can tailor database behavior to meet specific business needs while ensuring data consistency and reliability.</p></body></html>]]></description>
</item>
<item>
<title>Understanding PostgreSQL Rules</title>
<link>https://www.navicat.com/company/aboutus/blog/2414-understanding-postgresql-rules.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <meta charset="UTF-8">    <meta name="viewport" content="width=device-width, initial-scale=1.0">    <title>Understanding PostgreSQL Rules</title></head><body><b>Apr 11, 2024</b> by Robert Gravelle<br/><br/><p>PostgreSQL, a powerful open-source relational database management system, offers various features to enhance data management and manipulation. Among these features are rules, a mechanism used to control how queries and commands are processed within the database. In this article, we will explore how PostgreSQL rules work and how they differ from triggers, with a practical example using the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">DVD Rental Database</a>.</p><h1 class="blog-sub-title">What are PostgreSQL Rules?</h1><p>PostgreSQL rules provide a way to rewrite queries or commands before they are executed. They act as a set of predefined actions to be performed automatically based on certain conditions. Rules are primarily used to implement data abstraction and customization without altering the underlying schema.</p><p>Furthermore, PostgreSQL rules offer a powerful mechanism for enforcing business logic within the database itself, reducing the need for application-level constraints and ensuring consistent data manipulation across different applications or interfaces. 
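</p><p>For reference, the general shape of a rule definition, as given in the PostgreSQL documentation (where <code>event</code> is one of <code>SELECT</code>, <code>INSERT</code>, <code>UPDATE</code>, or <code>DELETE</code>), is:</p><pre><code>CREATE [ OR REPLACE ] RULE name AS ON event
    TO table_name [ WHERE condition ]
    DO [ ALSO | INSTEAD ] { NOTHING | command | ( command ; command ... ) }</code></pre><p>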
By encapsulating complex logic within the database, rules promote data integrity and maintainability while simplifying the development process.</p><h1 class="blog-sub-title">How do Rules Differ from Triggers?</h1><p>While rules and triggers serve similar purposes in PostgreSQL, there are notable differences between the two.</p><ol>    <li><strong>Execution Time</strong>:        <ul>            <li>Rules: Rules are applied during the query rewrite phase, before planning, meaning they change the query that is ultimately planned and executed.</li>            <li>Triggers: Triggers fire at execution time, before, after, or instead of an event such as INSERT, UPDATE, or DELETE.</li>        </ul> <br/>    </li>    <li><strong>Visibility</strong>:        <ul>            <li>Rules: Rules are transparent to users executing queries. The rewritten query is visible in the query execution plan.</li>            <li>Triggers: Triggers are explicitly defined on tables and are triggered by specific events.</li>        </ul> <br/>    </li>    <li><strong>Granularity</strong>:        <ul>            <li>Rules: Rules can be applied at the table level or view level, providing more flexibility in customization.</li>            <li>Triggers: Triggers are bound to specific tables and cannot be applied globally.</li>        </ul> <br/>    </li>    <li><strong>Complexity</strong>:        <ul>            <li>Rules: Rules can be complex and may involve multiple actions or conditions.</li>            <li>Triggers: Triggers are simpler to implement and manage as they are event-driven.</li>        </ul>    </li></ol><h1 class="blog-sub-title">Practical Example Using the "dvdrental" Sample Database: Enforcing Data Validation</h1><p>Let's explore a practical example to understand how PostgreSQL rules work in conjunction with the "dvdrental" sample database.</p><p>Suppose we want to enforce a constraint where rental durations must be at least one day. 
We can achieve this using a rule:</p><pre><code>CREATE RULE enforce_min_rental_duration AS    ON INSERT TO rental    WHERE (NEW.return_date - NEW.rental_date) &lt; INTERVAL '1 day'    DO INSTEAD NOTHING;</code></pre><p>In Navicat we can add a rule in the "Rules" tab of the Table Designer. The "Do instead" drop-down lets us choose between "INSTEAD" and "ALSO".  Meanwhile, the "Where" textbox accepts the criteria for executing the rule and the "Definition" box describes what the rule should do.  Here is the complete rule definition in Navicat:</p><img alt="enforce_min_rental_duration_rule (49K)" src="https://www.navicat.com/link/Blog/Image/2024/20240410/enforce_min_rental_duration_rule.jpg" height="248" width="679" /><p>This rule ensures that any attempt to insert a rental with a duration less than one day is prevented.</p><h1 class="blog-sub-title">Conclusion</h1><p>PostgreSQL rules are a powerful tool for controlling query execution and enforcing data integrity. While similar to triggers, they offer distinct advantages in terms of execution time, visibility, granularity, and complexity. By understanding the differences between rules and triggers and leveraging their capabilities, developers can effectively customize database behavior to meet specific requirements while maintaining data integrity and security.</p><p>Interested in giving Navicat 16 For PostgreSQL a try?  You can download the fully functioning application <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to get a free 14 day trial!</p></body></html>]]></description>
</item>
<item>
<title>Ensuring Data Integrity in PostgreSQL with Check Constraints</title>
<link>https://www.navicat.com/company/aboutus/blog/2412-ensuring-data-integrity-in-postgresql-with-check-constraints.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <meta name="viewport" content="width=device-width, initial-scale=1.0">    <title>Ensuring Data Integrity in PostgreSQL with Check Constraints</title></head><body><b>Mar 25, 2024</b> by Robert Gravelle<br/><br/>    <p>Data integrity is a critical aspect of any database system, ensuring that the data stored remains accurate, consistent, and meaningful. In PostgreSQL, one powerful tool for maintaining data integrity is the use of check constraints. These constraints allow you to define rules that data must adhere to, preventing the insertion or modification of invalid data. In this article, we'll explore how to use check constraints to validate data in PostgreSQL, using the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">DVD Rental Database</a> as a reference.</p>    <h1 class="blog-sub-title">Understanding Check Constraints</h1>    <p>Check constraints are rules that limit the values that can be entered into a column or set of columns in a table. These rules are enforced by the database system, preventing the insertion or modification of rows that violate the specified conditions. Check constraints are defined using the <code>CHECK</code> keyword followed by an expression that evaluates to a Boolean value.</p>    <h1 class="blog-sub-title">Validating Rental Durations</h1>    <p>Let's consider a scenario using a modified version of the "rental" table in the "dvdrental" database that contains a "rental_duration" column. 
The table definition might appear as follows in the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat</a> Table Designer:</p>    <img alt="rentals_with_rental_period_table_definition (48K)" src="https://www.navicat.com/link/Blog/Image/2024/20240325/rentals_with_rental_period_table_definition.jpg" height="187" width="653" /><p>Now, suppose we want to ensure that the duration of a rental is always greater than zero days. We can achieve this by adding a check constraint to the "rentals_with_rental_period" table as follows:</p>    <pre><code>ALTER TABLE rentals_with_rental_period
ADD CONSTRAINT rental_duration_check
CHECK (rental_duration &gt; 0);</code></pre>    <p>In Navicat we can add a check constraint in the "Checks" tab of the Table Designer. We just need to supply an expression and an optional name.  Navicat will create a unique name for us if we don't supply one! </p>        <img alt="rental_duration_check_in_navicat (28K)" src="https://www.navicat.com/link/Blog/Image/2024/20240325/rental_duration_check_in_navicat.jpg" height="127" width="646" />        <p>Upon hitting the Save button, Navicat will either create the check constraint or show an error message if any rows violate the constraint.</p>        <p>With this constraint in place, any attempt to insert or update a row in the "rentals_with_rental_period" table where the rental duration is less than or equal to zero will result in an error, ensuring that only valid rental durations are allowed.</p>    <h1 class="blog-sub-title">Enforcing Valid Ratings</h1>    <p>Another example from the "film" table in the "dvdrental" database involves validating film ratings. Suppose we want to restrict the ratings to only certain values, such as 'G', 'PG', 'PG-13', 'R', or 'NC-17'. 
We can achieve this with a check constraint:</p>    <pre><code>ALTER TABLE film
ADD CONSTRAINT film_rating_check
CHECK (rating IN ('G', 'PG', 'PG-13', 'R', 'NC-17'));</code></pre>    <p>Here is the same constraint in the Navicat Table Designer:</p>        <img alt="film_rating_check_in_navicat (34K)" src="https://www.navicat.com/link/Blog/Image/2024/20240325/film_rating_check_in_navicat.jpg" height="126" width="650" />        <p>Now, any attempt to insert or update a row in the "film" table with a rating that is not one of the specified values will be rejected, ensuring that only valid ratings are allowed.</p>    <h1 class="blog-sub-title">Handling NULL Values</h1>    <p>It's important to note that check constraints are not applied to rows where one or more columns contain a <code>NULL</code> value unless the constraint specifically includes a condition to check for <code>NULL</code>. For example, to enforce that the "rental_rate" column in the "film" table is always greater than zero and not <code>NULL</code>, we would use the following constraint:</p>    <pre><code>ALTER TABLE film
ADD CONSTRAINT film_rental_rate_check
CHECK (rental_rate &gt; 0 AND rental_rate IS NOT NULL);</code></pre>    <p>Here is the same constraint in the Navicat Table Designer:</p>        <img alt="film_rental_rate_check_in_navicat (43K)" src="https://www.navicat.com/link/Blog/Image/2024/20240325/film_rental_rate_check_in_navicat.jpg" height="146" width="681" />        <h1 class="blog-sub-title">Conclusion</h1>    <p>Check constraints are a powerful tool for ensuring data integrity in PostgreSQL. By defining rules that data must adhere to, you can prevent the insertion or modification of invalid data, helping to maintain the accuracy and consistency of your database. By incorporating them into your database design, you can build robust and reliable data systems that meet the needs of your organization.</p></body></html>]]></description>
</item>
<item>
<title>Exploring PostgreSQL's Foreign Data Wrapper and Statistical Functions</title>
<link>https://www.navicat.com/company/aboutus/blog/2409-exploring-postgresql-s-foreign-data-wrapper-and-statistical-functions.html</link>
<description><![CDATA[<!DOCTYPE html><html ><head>    <title>Exploring PostgreSQL's Foreign Data Wrapper and Statistical Functions</title></head><body>    <b>Mar 15, 2024</b> by Robert Gravelle<br/><br/>    <p>        PostgreSQL, renowned for its robustness and extensibility, offers several helpful functions for developers and database administrators alike. Among these functions, <code>file_fdw_handler</code>, <code>file_fdw_validator</code>, <code>pg_stat_statements</code>, <code>pg_stat_statements_info</code>, and <code>pg_stat_statements_reset</code> stand out as invaluable tools for enhancing database management and performance optimization. In today's blog we'll learn how to use all of these functions as well as how Navicat can help!    </p>    <h1 class="blog-sub-title">File Functions</h1>    <p>        PostgreSQL's Foreign Data Wrapper (FDW) functionality allows seamless integration of external data sources into the database. The <code>file_fdw_handler</code> and <code>file_fdw_validator</code> functions are specifically designed to handle foreign tables backed by files.    </p>    <p>        The <code>file_fdw_handler</code> function serves as an interface between PostgreSQL and the foreign data source, enabling the execution of SQL queries against files residing outside the database. Let's consider an example where we want to create a foreign table named <code>external_data</code> referencing a CSV file named <code>data.csv</code> (note that the file_fdw extension must be installed first):    </p>    <pre><code>CREATE EXTENSION file_fdw;

CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE external_data (
    id INT,
    name TEXT,
    age INT
) SERVER file_server OPTIONS (filename '/path/to/data.csv', format 'csv');</code></pre>    <p>        Meanwhile, the <code>file_fdw_validator</code> function ensures the integrity of the options provided when creating a foreign table, validating settings such as <code>filename</code> and <code>format</code>. 
This validation runs automatically when the foreign table's options are processed, so the function is not normally called directly; invalid options simply cause the <code>CREATE FOREIGN TABLE</code> statement to fail with an error.    </p>    <h1 class="blog-sub-title">Statistical Functions</h1>    <p>        PostgreSQL's pg_stat_statements module provides a set of built-in functions for monitoring and analyzing query performance. Among these, <code>pg_stat_statements</code>, <code>pg_stat_statements_info</code>, and <code>pg_stat_statements_reset</code> are indispensable for identifying bottlenecks and optimizing database performance.    </p>    <p>        <code>pg_stat_statements</code> is a module that records statistics about SQL statements executed by a server. It tracks details such as execution counts, total runtime, and resource usage for each unique query. To enable <code>pg_stat_statements</code>, you need to add it to the <code>shared_preload_libraries</code> configuration parameter in <code>postgresql.conf</code>:</p><pre><code>shared_preload_libraries = 'pg_stat_statements'</code></pre> <p>After restarting the PostgreSQL server and running <code>CREATE EXTENSION pg_stat_statements;</code>, you can query the statistics using:</p>    <pre><code>        SELECT * FROM pg_stat_statements;    </code></pre>    <p>        <code>pg_stat_statements_info</code> provides additional information about the <code>pg_stat_statements</code> module, such as the number of times statement entries have been deallocated and the time of the last statistics reset. It can be queried as follows:    </p>    <pre><code>        SELECT * FROM pg_stat_statements_info;    </code></pre>    <p>        Finally, <code>pg_stat_statements_reset</code> resets the statistics collected by <code>pg_stat_statements</code>, allowing you to start afresh with performance monitoring. 
Simply execute:    </p>    <pre><code>        SELECT pg_stat_statements_reset();    </code></pre>    <h1 class="blog-sub-title">Working with PostgreSQL's Built-in Functions in Navicat</h1>    <p>We can access all of the above functions in <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL</a> or <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium</a> 16 by expanding the "Functions" section in the Objects Pane:</p>        <img alt="PostgreSQL_functions_in_Navicat (113K)" src="https://www.navicat.com/link/Blog/Image/2024/20240315/PostgreSQL_functions_in_Navicat.jpg" height="456" width="1093" />        <p>To execute a function, simply select it from the Objects list and click the Execute Function button:</p>        <img alt="execute_function_button (62K)" src="https://www.navicat.com/link/Blog/Image/2024/20240315/execute_function_button.jpg" height="244" width="625" />        <p>That will bring up a dialog where you can supply input parameter values:</p>        <img alt="input_parameter_dialog (33K)" src="https://www.navicat.com/link/Blog/Image/2024/20240315/input_parameter_dialog.jpg" height="274" width="453" />        <p>Click the OK button to execute the function and view the results (or Cancel to abort):</p>        <img alt="pg_stat_statements_results (330K)" src="https://www.navicat.com/link/Blog/Image/2024/20240315/pg_stat_statements_results.jpg" height="605" width="679" />        <p>        PostgreSQL's built-in functions, including <code>file_fdw_handler</code>, <code>file_fdw_validator</code>, <code>pg_stat_statements</code>, <code>pg_stat_statements_info</code>, and <code>pg_stat_statements_reset</code>, play a pivotal role in enhancing database management and optimizing query performance. 
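</p><p>As a practical illustration, one common use of <code>pg_stat_statements</code> is listing the most time-consuming statements. A minimal sketch (note that the <code>total_exec_time</code> column was named <code>total_time</code> in versions of the module shipped before PostgreSQL 13):</p><pre><code>SELECT query, calls, total_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;</code></pre><p>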
By leveraging these functions effectively, developers and administrators can streamline operations and ensure optimal utilization of PostgreSQL's capabilities.</p></body></html>]]></description>
</item>
<item>
<title>Exploring Advanced PostgreSQL Data Types - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2407-exploring-advanced-postgresql-data-types-part-2.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Exploring Advanced PostgreSQL Data Types: Part 2</title></head><body><b>Mar 8, 2024</b> by Robert Gravelle<br/><br/>    <h1 class="blog-sub-title">Range Types</h1>    <p>Range types offer a concise way to represent a range of values within a single database field. They find application in various domains, from temporal data to numeric intervals. In this blog article, we'll be delving into their usage (and benefits!) using both DML/SQL statements and <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL 16</a>.</p>    <h1 class="blog-sub-title">Understanding Range Types</h1>    <p>In PostgreSQL, range types allow for the representation of continuous ranges of values. These ranges can be of different data types such as numeric, date, or timestamp. For example, a range might represent a period of time, a set of temperatures, or a range of product prices.</p>    <h1 class="blog-sub-title">Practical Example: Tracking Rental Durations</h1>    <p>Let's consider a scenario where we want to track the duration of rentals in the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">dvdrental sample database</a>. We can utilize range types to store rental durations efficiently. 
Here are the statements to create and populate the new "rentals_with_rental_period" table:</p>    <pre><code>CREATE TABLE rentals_with_rental_period (
    rental_id SERIAL PRIMARY KEY,
    customer_id INT,
    rental_duration INT,
    rental_period DATERANGE
);

INSERT INTO rentals_with_rental_period (customer_id, rental_duration, rental_period)
VALUES
(1, 7, '[2024-02-01, 2024-02-08]'),
(2, 5, '[2024-01-15, 2024-01-20]');</code></pre>    <p>In Navicat, we can create our table using the Table Designer:</p>        <img alt="rentals_with_rental_period_in_table_designer (59K)" src="https://www.navicat.com/link/Blog/Image/2024/20240309/rentals_with_rental_period_in_table_designer.jpg" height="250" width="664" />        <p>After creating the table, we can add data to it. Be sure to prefix the Range values with a square bracket "[" and end them with a parenthesis ")". That tells Navicat that the values belong to a range:</p>        <img alt="rentals_with_rental_period_table (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20240309/rentals_with_rental_period_table.jpg" height="118" width="463" />    <p>In this example, the "rental_period" column stores ranges representing the start and end dates of each rental. We can easily query rentals that include a specific date using the <code>@></code> operator:</p>    <img alt="range_query (40K)" src="https://www.navicat.com/link/Blog/Image/2024/20240309/range_query.jpg" height="239" width="549" />    <h1 class="blog-sub-title">Expanding Applications: Numeric Intervals</h1>    <p>Range types are not limited to temporal data. They can also be used to represent numeric intervals. For instance, imagine a scenario where a product's price can vary within a specific range based on quantity purchased. 
We can use range types to model this effectively:</p>    <pre><code>CREATE TABLE product_price (
    product_id SERIAL PRIMARY KEY,
    price_range NUMRANGE
);

INSERT INTO product_price (price_range)
VALUES
('[10.00, 20.00)'),
('[20.00, 30.00)'),
('[30.00,)');</code></pre>    <p>In this example, the "price_range" column stores ranges representing the minimum and maximum prices for each product. We can query products within a specific price range using the <code>@></code> operator:</p>    <pre><code>SELECT * FROM product_price
WHERE price_range @> 25.00;</code></pre>        <h1 class="blog-sub-title">Conclusion</h1>    <p>Range types in PostgreSQL offer a powerful way to represent and query continuous ranges of values. Whether dealing with temporal data, numeric intervals, or other continuous values, range types provide a concise and efficient solution. By leveraging range types, developers can enhance the expressiveness and flexibility of their database schemas, paving the way for more sophisticated applications.</p><p>Looking for an easy-to-use graphical tool for PostgreSQL database development? Navicat 16 For PostgreSQL has got you covered.  Click <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to download the fully functioning application for a free 14 day trial!</p></body></html>]]></description>
</item>
<item>
<title>Exploring Advanced PostgreSQL Data Types - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2405-exploring-advanced-postgresql-data-types-part-1.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>Exploring Advanced PostgreSQL Data Types: Arrays and Enums</title></head><body>  <b>Mar 1, 2024</b> by Robert Gravelle<br/><br/>    <h1 class="blog-sub-title">Arrays and Enums</h1>    <p>PostgreSQL, renowned for its extensibility and versatility, offers several data types beyond the conventional integer and string. Among these are the array and enum, which empower developers with advanced data modeling capabilities. In this blog article, we'll be delving into these sophisticated data types, demonstrating their usage and benefits within the context of the free <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">dvdrental sample database</a>.</p>    <h1 class="blog-sub-title">The Array Type</h1>    <p>Arrays in PostgreSQL enable the storage of multiple values within a single database field. This capability proves invaluable in scenarios where dealing with lists or sets of data is essential. Let's consider a practical example.  Suppose we want to store films along with the actors who appeared in each film. We can utilize the array data type to achieve this efficiently. 
First, here are the statements to create and populate the new "films_with_actors" table:</p>    <pre><code>CREATE TABLE films_with_actors (
    film_id SERIAL PRIMARY KEY,
    title VARCHAR(255),
    actors TEXT[]
);

INSERT INTO films_with_actors (title, actors) VALUES
('Inception', ARRAY['Leonardo DiCaprio', 'Joseph Gordon-Levitt']),
('The Shawshank Redemption', ARRAY['Tim Robbins', 'Morgan Freeman']);</code></pre>        <p>In Navicat, we can create our table using the Table Designer:</p>        <img alt="films_with_actors_table_design (57K)" src="https://www.navicat.com/link/Blog/Image/2024/20240301/films_with_actors_table_design.jpg" height="343" width="810" />        <p>Note that if we append square brackets "[]" to the text type, Navicat will recognize it as an Array type and add a "1" to the Dimension field upon saving the table, indicating that it is a one-dimensional array.</p>        <p>After creating the table, we will be able to add data to it. Be sure to enclose the Array values in curly braces "{}" to tell Navicat which values to include within each array:</p>        <img alt="films_with_actors_table_with_data (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20240301/films_with_actors_table_with_data.jpg" height="92" width="547" />        <p>In queries, we can refer to a specific Array element by appending the desired index within square brackets.  Hence, "actors[1]" would fetch the first Array value:</p>        <img alt="selecting_array_values (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20240301/selecting_array_values.jpg" height="254" width="563" />    <h1 class="blog-sub-title">The Enum Type</h1>    <p>Short for "Enumerated", the Enum type allows developers to define a fixed set of possible values for a column. This enhances data integrity and clarity within the database schema. Let's illustrate this by adding a "rating" column to the "films_with_actors" table. 
We can define an enumerated type for movie ratings using the following DDL statement:</p>    <pre><code>CREATE TYPE rating AS ENUM ('G', 'PG', 'PG-13', 'R', 'NC-17');

ALTER TABLE films_with_actors ADD COLUMN rating rating;</code></pre>       <p>In Navicat, we can append the new column in the Table Designer by clicking the "Add Field" button above the column list. After we've created the rating Enum using the CREATE TYPE statement above, we can choose it by selecting the "(Type)" item from the Type drop-down and then choosing the rating item from the Object Type list:</p>      <img alt="rating_column (61K)" src="https://www.navicat.com/link/Blog/Image/2024/20240301/rating_column.jpg" height="383" width="672" />        <p>The table rating column will now include a drop-down with our defined Enum values:</p>        <img alt="rating_column_in_grid_view (32K)" src="https://www.navicat.com/link/Blog/Image/2024/20240301/rating_column_in_grid_view.jpg" height="150" width="635" />        <h1 class="blog-sub-title">Conclusion</h1>    <p>PostgreSQL's array and enum data types provide developers with powerful tools to model complex data structures efficiently. By leveraging these advanced features, developers can enhance data integrity, streamline queries, and build more robust database schemas. In next week's blog, we'll conclude our exploration of PostgreSQL's advanced data types with a look at the Range type. Offering a concise way to represent a range of values within a single database field, the Range type is highly useful in various domains, from temporal data to numeric intervals. </p><p>Looking for an easy-to-use graphical tool for PostgreSQL database development? Navicat 16 For PostgreSQL has got you covered.  Click <a class="default-links"  href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to download the fully functioning application for a free 14-day trial!</p></body>]]></description>
</item>
<item>
<title>Why Choose PostgreSQL for Your Next IT Project</title>
<link>https://www.navicat.com/company/aboutus/blog/2403-why-choose-postgresql-for-your-next-it-project.html</link>
<description><![CDATA[<!DOCTYPE html><html>  <head>    <title>Why Choose PostgreSQL for Your Next IT Project</title></head><body><b>Feb 23, 2024</b> by Robert Gravelle<br/><br/>    <p>In the dynamic landscape of database management systems, selecting the right platform for your project is a crucial        decision. With an array of options available, each catering to specific needs, making a choice can be a daunting task. This blog will outline a few reasons why PostgreSQL may just be the relational database solution you're looking for.</p>    <h1 class="blog-sub-title">Open Source Advantage</h1>    <p>At the core of PostgreSQL's appeal is its open-source nature. Open-source databases offer a cost-effective        solution without compromising on features and performance. With PostgreSQL, you benefit from a vibrant community        of developers constantly improving and refining the system. This collaborative effort ensures that the database        stays up-to-date with the latest technological advancements and security measures.</p>    <h1 class="blog-sub-title">Extensibility and Customization</h1>    <p>PostgreSQL's extensibility sets it apart from many of its counterparts. It allows users to define their own data types,        operators, and functions, giving developers a high degree of flexibility in tailoring the database to their        specific project requirements. This extensibility is a boon for projects with unique data storage and processing        needs.</p>    <h1 class="blog-sub-title">Advanced Data Types and Features</h1>    <p>PostgreSQL supports a wide range of advanced data types, including arrays, hstore (key-value pairs), and JSON.        Its support for complex data structures makes it an ideal choice for projects that demand versatility and        adaptability in handling diverse data formats. 
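</p><p>As a brief illustration (a sketch only — the <code>events</code> table and its JSONB <code>info</code> column are hypothetical examples, not part of this article), JSON documents can be filtered and unpacked directly in SQL:</p><pre><code>-- match documents containing a key/value pair, then extract a field as text
SELECT info->>'name' AS name
FROM events
WHERE info @> '{"type": "click"}';</code></pre><p>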
Additionally, features like full-text search, geospatial support,        and advanced indexing mechanisms enhance its capability to manage complex datasets efficiently.</p>    <h1 class="blog-sub-title">ACID Compliance</h1>    <p>PostgreSQL adheres strictly to the ACID (Atomicity, Consistency, Isolation, Durability) principles, ensuring        transactional integrity even in the most demanding environments. This level of reliability is crucial for        applications where data consistency and accuracy are non-negotiable, such as financial systems or healthcare        applications.</p>    <h1 class="blog-sub-title">Performance Tuning and Optimization</h1>    <p>PostgreSQL provides a wide assortment of performance tuning options, allowing developers to optimize the database for        specific workloads. Its query optimizer is renowned for its efficiency, and administrators can fine-tune various        parameters to achieve optimal performance tailored to their specific requirements. Whether handling large-scale data        warehousing or real-time analytics, PostgreSQL can be fine-tuned to deliver exceptional speed and        responsiveness.</p>    <h1 class="blog-sub-title">Robust Community Support</h1>    <p>The PostgreSQL community is one of the most active and supportive in the open-source database realm. With a vast        pool of experienced developers, administrators, and contributors, users can easily find solutions to        challenges, share best practices, and stay updated on the latest developments. The community-driven approach        ensures a wealth of resources, including documentation, forums, and third-party tools, contributing to a smooth        development and maintenance process.</p>    <h1 class="blog-sub-title">Scalability</h1>    <p>Scalability is a critical factor for any growing project. PostgreSQL excels in that department, supporting both        vertical and horizontal scaling. 
Whether your project demands a single-node deployment or a distributed        architecture, PostgreSQL can seamlessly adapt to varying workloads and data volumes, ensuring your database can        evolve alongside your project's needs.</p>    <h1 class="blog-sub-title">Conclusion</h1>    <p>In today's blog, we explored a few reasons why PostgreSQL may be the right relational database solution for your next IT project.        Its open-source nature, extensibility, advanced features, ACID compliance, performance tuning capabilities,        robust community support, and scalability make it an ideal solution for projects ranging from small-scale        applications to large enterprise systems. </p>            <p>Looking for an easy-to-use graphical tool for PostgreSQL database development? Navicat 16 For PostgreSQL has got you covered.  Click <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to download the fully functioning application for a free 14-day trial!</p></body></html>]]></description>
</item>
<item>
<title>Working with PostgreSQL Materialized Views</title>
<link>https://www.navicat.com/company/aboutus/blog/2401-working-with-postgresql-materialized-views.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with PostgreSQL Materialized Views</title></head><body><b>Feb 16, 2024</b> by Robert Gravelle<br/><br/><p>Last week's tutorial guided us through the creation of Materialized Views in PostgreSQL, using the <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">DVD Rental Database</a> as a practical example. As we learned there, PostgreSQL Materialized Views provide a powerful mechanism to enhance query performance by precomputing and storing the result set of a query as a physical table. Today's follow-up will cover other pertinent Materialized View operations such as refreshing a view, executing queries against it, as well as deleting a view should you no longer require it. As with the last blog article, we'll go over both the SQL statements as well as how to achieve the same result via the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat</a> GUI. </p>    <h1 class="blog-sub-title">Refreshing a Materialized View</h1>        <p>The data in a Materialized View needs to be refreshed periodically to reflect any changes in the underlying tables. 
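</p><p>One caveat worth an aside before the command itself: a plain refresh briefly locks the view against readers while it runs. If the view has a unique index, PostgreSQL can instead refresh it without blocking concurrent queries — a sketch (the unique index shown is an assumption, not part of the original example):</p><pre><code>-- CONCURRENTLY requires a unique index on the materialized view
CREATE UNIQUE INDEX idx_mv_category_revenue ON mv_category_revenue (category);

REFRESH MATERIALIZED VIEW CONCURRENTLY mv_category_revenue;</code></pre><p>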
You can use the following command to refresh the Materialized View:</p>        <pre><code>REFRESH MATERIALIZED VIEW mv_category_revenue;</code></pre>                <p>In Navicat, we can refresh and completely replace the contents of a materialized view by right-clicking (or control-click on macOS) it in the Objects tab and selecting "Refresh Materialized View With" -> "Data" or "No Data" from the pop-up menu:</p>        <img alt="refresh_materialized_view (53K)" src="https://www.navicat.com/link/Blog/Image/2024/20240216/refresh_materialized_view.jpg" height="362" width="655" />    <h1 class="blog-sub-title">Querying a Materialized View</h1>    <p>Now that we have our Materialized View, we can query it just like any other table:</p>    <pre><code>SELECT * FROM mv_category_revenue;</code></pre>    <p>This query will return the film categories along with their total revenue, providing a quick and efficient way to retrieve this information without repeatedly joining multiple tables.</p>    <p>In Navicat, you can write a query in the Query Editor or using the Query Builder tool. In the case of the Query Editor, the autocomplete feature will recognize the Materialized View with just a couple of key strokes!</p>        <img alt="materialized_view_in_autocomplete_list (62K)" src="https://www.navicat.com/link/Blog/Image/2024/20240216/materialized_view_in_autocomplete_list.jpg" height="304" width="552" />        <p>Materialized Views are also included in the Object pane of the Query Builder. 
You can add a Materialized View to the query by dragging it from the Object pane to the Diagram pane or by double-clicking it on the Object pane:</p>        <img alt="materialized_view_query (107K)" src="https://www.navicat.com/link/Blog/Image/2024/20240216/materialized_view_query.jpg" height="671" width="547" />        <h1 class="blog-sub-title">Deleting a Materialized View</h1>    <p>Should you no longer require a Materialized View, you can delete it using the DROP MATERIALIZED VIEW command.  Here's the statement to drop the mv_category_revenue view:</p>        <pre><code>DROP MATERIALIZED VIEW mv_category_revenue;</code></pre>        <p>There are a couple of ways to delete a Materialized View in Navicat.  The first is to:</p>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">     <li>Select "Materialized View" in the Main Window's toolbar.</li>      <li> Then select the Materialized View that you want to delete in the Objects list. That will enable several buttons on the Objects toolbar, including the Delete Materialized View button:          <p><img alt="delete_materialized_view_button (63K)" src="https://www.navicat.com/link/Blog/Image/2024/20240216/delete_materialized_view_button.jpg" height="303" width="837" /></p>    </li>    <li>Clicking the Delete Materialized View button will present a dialog prompt where you can confirm that you really do wish to delete the selected Materialized View.</li>    </ul>    <p>The second way to delete a Materialized View in Navicat is to right-click it (or control-click on macOS) in either the Main Window's Navigation pane or Objects list and select "Delete Materialized View" from the context menu:</p>     <img alt="delete_materialized_view_menu_command (40K)" src="https://www.navicat.com/link/Blog/Image/2024/20240216/delete_materialized_view_menu_command.jpg" height="302" width="494" /><h1 class="blog-sub-title">Conclusion</h1><p>In this tutorial, we learned how to execute some pertinent 
Materialized View operations, including refreshing a view, executing queries against it, as well as deleting a view. In each case, we covered both the SQL statements as well as how to achieve the same result via the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat</a> GUI. </p></body></html>]]></description>
</item>
<item>
<title>Introduction to PostgreSQL Materialized Views</title>
<link>https://www.navicat.com/company/aboutus/blog/2399-introduction-to-postgresql-materialized-views.html</link>
<description><![CDATA[<!DOCTYPE html><html lang="en"><head>    <title>Introduction to PostgreSQL Materialized Views</title></head><body><b>Feb 8, 2024</b> by Robert Gravelle<br/><br/>    <p>PostgreSQL Materialized Views provide a powerful mechanism to enhance query performance by precomputing and storing the result set of a query as a physical table. This tutorial will guide you through the creation of Materialized Views in PostgreSQL, using the <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">DVD Rental Database</a> as a practical example.</p>    <h1 class="blog-sub-title">Understanding Materialized Views</h1>    <p>A Materialized View is a snapshot of a query's result set that is stored as a physical table. Unlike regular views, which are virtual and execute the underlying query every time they are referenced, Materialized Views persist the data, allowing for faster query performance at the cost of periodic refreshes.</p>    <p>Materialized Views are particularly useful in scenarios where the underlying data changes infrequently compared to the frequency of query executions. This makes them ideal for scenarios such as reporting, data warehousing, and situations where real-time data is not a strict requirement.</p>    <h1 class="blog-sub-title">Setting Up the DVD Rental Database</h1>    <p>Before we dive into Materialized Views, let's set up the DVD Rental Database. It's PostgreSQL's version of the popular Sakila Sample Database for MySQL. You can download the DVD Rental Database from the official PostgreSQL tutorial page (<a class="default-links" href="https://www.postgresqltutorial.com/postgresql-sample-database/" target="_blank">PostgreSQL Sample Database</a>).</p>    <p>The database file is in ZIP format (dvdrental.zip) so you need to extract it to dvdrental.tar before loading the sample database into the PostgreSQL database server. 
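</p><p>Creating the empty target database itself is a single statement (a minimal sketch; run it as a user with sufficient privileges):</p><pre><code>CREATE DATABASE dvdrental;</code></pre><p>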
Once you have extracted the .tar file, create a new database called "dvdrental" and execute the pg_restore command to populate the dvdrental database from the contents of the .tar file:</p>    <pre>        <code>pg_restore -U postgres -d dvdrental D:\sampledb\postgres\dvdrental.tar</code>    </pre>    <p>Replace the path above with the one that points to the extracted dvdrental.tar on your system.</p>    <p>You can find the detailed installation instructions <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/load-postgresql-sample-database/" target="_blank">here</a>.</p>        <h1 class="blog-sub-title">Creating a Materialized View</h1>    <p>Let's say we want to create a Materialized View that shows the total revenue generated by each film category. Here's a step-by-step guide:</p>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>Connect to your PostgreSQL database.</li>        <li>Create the Materialized View using the following SQL statement:</li>        <pre><code>CREATE MATERIALIZED VIEW mv_category_revenue AS
SELECT
    c.name AS category,
    SUM(p.amount) AS total_revenue
FROM
    category c
    JOIN film_category fc ON c.category_id = fc.category_id
    JOIN film f ON fc.film_id = f.film_id
    JOIN inventory i ON f.film_id = i.film_id
    JOIN rental r ON i.inventory_id = r.inventory_id
    JOIN payment p ON r.rental_id = p.rental_id
GROUP BY
    c.name;</code></pre>        <p>In this example, we join multiple tables from the DVD Rental Database to calculate the total revenue for each film category.</p>                    <p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat For PostgreSQL</a> (or <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium</a>) 16:</p>         <ul style="margin-left: 24px; line-height: 24px;">          <li type="I">Click the 
"Materialized View" button to show the Materialized View Object List and then click on "+ New Materialized View" in the Objects toolbar to open the View Designer:            <p><img alt="materialized_view_buttons (57K)" src="https://www.navicat.com/link/Blog/Image/2024/20240208/materialized_view_buttons.jpg" height="391" width="708" /> </p>          </li>            <li type="I">Enter the SELECT portion of the above statement into the Definition editor:          <p><img alt="materialized_view_select_statement (51K)" src="https://www.navicat.com/link/Blog/Image/2024/20240208/materialized_view_select_statement.jpg" height="265" width="475" /></p></li>          <li type="I">We can click the Preview button to verify that our statement works as expected:          <p><img alt="materialized_view_preview (89K)" src="https://www.navicat.com/link/Blog/Image/2024/20240208/materialized_view_preview.jpg" height="653" width="475" /></p></li>              <li type="I">To create the new Materialized View, click the Save button.  A dialog will appear prompting for the Materialized View Name. Let's call it "mv_category_revenue" just as we did in the CREATE MATERIALIZED VIEW statement above:          <p><img alt="materialized_view_name (85K)" src="https://www.navicat.com/link/Blog/Image/2024/20240208/materialized_view_name.jpg" height="466" width="649" /></p>          </li>          <li type="I">Upon clicking the dialog Save button, Navicat will change the new materialized view name from "untitled" to the one we provided. 
It will also add our new materialized view to the Materialized Views in the left-hand Navigation Pane:          <p><img alt="materialized_view_in_database_Navigation_Pane (96K)" src="https://www.navicat.com/link/Blog/Image/2024/20240208/materialized_view_in_database_Navigation_Pane.jpg" height="333" width="729" /></p>          </li>                                                     </ul>        </ul>    <h1 class="blog-sub-title">Conclusion</h1>    <p>PostgreSQL Materialized Views are a valuable tool for optimizing query performance in scenarios where real-time data is not critical. By pre-computing and storing the results of complex queries, Materialized Views can significantly improve response times for analytical and reporting tasks. In this tutorial, we learned how to create a Materialized View for the DVD Rental Database, showcasing their practical application in a real-world scenario.</p></body></html>]]></description>
</item>
<item>
<title>Getting Started with SQLite</title>
<link>https://www.navicat.com/company/aboutus/blog/2397-getting-started-with-sqlite.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>Getting Started with SQLite</title></head><body><b>Feb 2, 2024</b> by Robert Gravelle<br/><br/>    <p>SQLite is a lightweight, self-contained, and serverless relational database management system (RDBMS) that is        widely used for embedded systems, mobile applications, and small to medium-sized websites. It is easy to set up,        requires minimal configuration, and offers a powerful set of features for managing and manipulating data. In this        guide, we will walk you through the process of getting started with SQLite, including installation and using the        popular Chinook sample database for SQL examples.</p>    <h1 class="blog-sub-title">Installing SQLite</h1>    <h3>Windows:</h3>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>Visit the SQLite download page at <a class="default-links" href="https://www.sqlite.org/download.html" target="_blank">https://www.sqlite.org/download.html</a>.</li>        <li>Scroll down to the "Precompiled Binaries for Windows" section.</li>        <li>Download the appropriate precompiled binary for your system architecture (32-bit or 64-bit).</li>        <li>Extract the downloaded ZIP file to a location on your machine.</li>        <li>Open the extracted folder and locate the <code>sqlite3.exe</code> executable (the command-line shell).</li>        <li>To make SQLite accessible from any command prompt window, add the folder containing <code>sqlite3.exe</code>            to your system's PATH environment variable.</li>    </ul>    <h3>macOS:</h3>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>SQLite is pre-installed on macOS, so there's no need for a separate installation.</li>        <li>Open the Terminal application.</li>        <li>Type <code>sqlite3</code> and press Enter to start the SQLite shell.</li>    </ul>    <h3>Linux:</h3>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 
24px;">        <li>Most Linux distributions come with SQLite pre-installed. If not, you can install it using your package            manager.</li>        <ul>            <li>For Debian/Ubuntu: <code>sudo apt-get install sqlite3</code></li>            <li>For Red Hat/Fedora: <code>sudo dnf install sqlite</code></li>            <li>For Arch Linux: <code>sudo pacman -S sqlite</code></li>        </ul>        <li>Once installed, open the terminal and type <code>sqlite3</code> to start the SQLite shell.</li>    </ul>    <h1 class="blog-sub-title">Using the Chinook Sample Database in Navicat</h1>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>Download the <a class="default-links" href="https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip">Chinook database</a> ZIP file and extract its contents.        <p>You will find a file named <code>chinook.db</code>. Let's create a new database connection in Navicat.</p></li>        <li>Select File -> New Connection -> SQLite... from the main menu to launch the New Connection dialog:            <p><img alt="new_sqlite_connection_menu_item (64K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/new_sqlite_connection_menu_item.jpg" height="480" width="426" /></p>        </li>        <li>In the dialog, enter "Chinook" for the Connection Name and then click on the Ellipsis button [...] to navigate to the Database File.         Click the Test Connection button to verify that we can connect to the database. (Note that the Chinook database does not require a user name or password):        <p><img alt="new_sqlite_connection_dialog (55K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/new_sqlite_connection_dialog.jpg" height="707" width="562" /></p>        </li>        <li>Click the OK button to close the dialog.  
You should see our new connection in the Connections pane:        <p><img alt="chinook_in_connections_pane (35K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/chinook_in_connections_pane.jpg" height="460" width="217" /></p>        </li>    </ul>    <h1 class="blog-sub-title">Basic SQL Operations with Chinook</h1>    <h3>Connecting to the Chinook Database:</h3>    <p>Now that we've created a new connection for the Chinook database, let's open the connection so that we can interact with the database. To do that:</p>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">      <li>Locate the Chinook item in the Connections Pane and click on it in order to highlight it. </li>      <li>Select File -> Open Connection from the main menu. That should show the main database.</li>    </ul>        <h3>Querying Data</h3>    <p>To retrieve information from the Chinook database, you can use the <code>SELECT</code> statement. For        example "SELECT * FROM artists;":</p>    <img alt="select_artists_query (120K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/select_artists_query.jpg" height="873" width="492" />    <h3>Filtering Data</h3>    <p>Filtering allows you to narrow down your results. 
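</p><p>Filters are ordinary WHERE clauses. A couple of sketches against the standard Chinook tables (illustrative only; results depend on your copy of the data):</p><pre><code>-- pattern matching with LIKE
SELECT Name FROM tracks WHERE Name LIKE 'Love%';

-- combining conditions
SELECT Name, Milliseconds FROM tracks WHERE AlbumId = 1 AND Milliseconds > 300000;</code></pre><p>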
For instance, try "SELECT trackid, name, composer FROM tracks WHERE composer = 'Ludwig van Beethoven';":</p>    <img alt="select_specific_artist_query (83K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/select_specific_artist_query.jpg" height="346" width="704" />    <h3>Updating Records</h3>    <p>To update existing data, we can use the <code>UPDATE</code> statement, or simply edit the data in place!</p>    <img alt="editing_a_record (141K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/editing_a_record.jpg" height="479" width="738" />    <h3>Inserting Records</h3>    <p>To add a new record, there's no need to use the <code>INSERT</code> statement; in Navicat, we can simply click the Add Record button:</p>    <img alt="add_record_button (24K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/add_record_button.jpg" height="218" width="325" />    <p>That will append a new empty row to the table, ready for data entry:</p>    <img alt="new_record (11K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/new_record.jpg" height="100" width="272" />        <h3>Deleting Records</h3>    <p>Deleting a record in Navicat is equally straightforward; just highlight the row to remove and press the Delete key. A dialog will appear, asking for confirmation:</p>    <img alt="deleting_a_record (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20240202/deleting_a_record.jpg" height="220" width="499" />    <p>In today's blog, we learned how to get started with SQLite, including the installation process and how to perform basic SQL operations against the         popular Chinook sample database. Whether you're a beginner or an experienced developer, SQLite's simplicity and versatility make it        an excellent choice for various applications. 
Moreover, <a class="default-links" href="https://www.navicat.com/products/navicat-for-sqlite" target="_blank">Navicat for SQLite</a> (or <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium</a>) 16 is the perfect tool to explore SQLite's        more advanced features and capabilities and to efficiently manage your data.</p></body></html>]]></description>
</item>
<item>
<title>Create Custom Metrics In Navicat Monitor 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2395-create-custom-metrics-in-navicat-monitor-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Create Custom Metrics In Navicat Monitor 3</title></head><body><b>Jan 26, 2024</b> by Robert Gravelle<br/><br/><p><a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">Navicat Monitor 3</a> is a safe, simple and agentless remote server monitoring tool that includes many powerful features to make your monitoring as effective as possible. You can access Navicat Monitor from anywhere via a web browser to view statistics on server load and performance, including availability, disk usage, network I/O, table locks and more.</p><img alt="db_metrics (39K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/db_metrics.jpg" height="525" width="258" /><p>Did you know that you can also collect custom performance metrics for specific instances using your own query, and receive alerts about your custom data when the metric value passes certain thresholds and durations?  Custom metrics can even be displayed as charts to help better understand your data and quickly identify trends. In today's blog we'll create a custom metric that shows the average cost of movie rentals in the Sakila Sample Database.</p><h1 class="blog-sub-title">Creating a Custom Metric</h1><p>You'll find the Custom Metrics page in the ALERT &amp; REPORT section on the Configurations tab:</p><img alt="custom_metrics_link (98K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/custom_metrics_link.jpg" height="801" width="732" /><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>In the Custom Metrics page, click the "+ New Custom Metric" button to bring up the New Custom Metric dialog.</li><li>The first step is to provide the METRIC DETAILS, which include the METRIC NAME and DESCRIPTION. We'll call our metric "Average Payment". (Note that the name cannot include the  "&lt;", "&gt;", ":", "&quot;", "/", "|", "?", "*" characters or any OS reserved names.) 
For the DESCRIPTION, let's add "The average payment in the Sakila payment table.".</li><li>For the DATABASE TYPE, we'll select "MySQL".</li><li>We can collect metrics from all instances or from one specific instance. In our case, we'll select our MySQL instance.</li><li>In the QUERY field, it's important to note that the query must return a single, numeric scalar or NULL value. Therefore, we should apply an aggregate function to the column of interest, such as AVG, MIN, or MAX. In our case, we'll use the AVG function: "SELECT AVG(amount) FROM sakila.payment;". Be sure to prefix the table with the DB name and then run the query in the Navicat client to make sure that it returns a single value. </li><li>Now let's click the Test Metric Collection button to verify that data can be successfully collected from selected instances within a reasonable duration. Here's what our query produced:<p><img alt="Test_Metric_Collection_results (35K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/Test_Metric_Collection_results.jpg" height="382" width="793" /></p></li><li>For the DATA DISPLAY we can choose to use collected or calculated values. Collected Values are the actual values collected after running the query, whereas calculated values use a calculated rate of change between collections, which measures the difference of the metric value divided by the number of seconds between each collection. The latter is useful in situations where new values are collected very frequently. We'll stick with the Collected Values for our metrics.<p>Here's what we've got so far:<br /><img alt="New_Custom_Metrics_screen_details (55K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/New_Custom_Metrics_screen_details.jpg" height="728" width="714" /></p></li><li>On the Next screen, we can add an alert for our Custom Metric. 
We might want to do so if our metric was related to the server's health, but since ours is really only informational, we'll move the Enable Alert slider to the off position. That will grey out the rest of the screen, with the exception of the ALERT NAME field, which is required. We'll call ours "Average Payment Alert":<p><img alt="alerts_disabled (63K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/alerts_disabled.jpg" height="880" width="709" /></p></li><li>The next and final screen shows a summary of our new Custom Metric. There, we can ENABLE (or disable) DATA COLLECTION as well as ENABLE (or disable) ALERTs:<p><img alt="summary_screen (32K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/summary_screen.jpg" height="478" width="731" /></p></li><li>Clicking the Create Custom Metric button will close the dialog and show our new Custom Metric in the list:<p><img alt="Average_Payment_custom_metric_in_Custom_Metrics_list (23K)" src="https://www.navicat.com/link/Blog/Image/2024/20240126/Average_Payment_custom_metric_in_Custom_Metrics_list.jpg" height="186" width="757" /></p></li></ul><h1 class="blog-sub-title">Conclusion</h1><p><a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">Navicat Monitor 3</a>'s Custom Metrics are the perfect tool for tracking data that is meaningful to you and your organization. Moreover, viewing changes over time in an area or line chart allows you to spot helpful patterns more easily. Finally, alerts can inform you of potential opportunities or dangers as soon as possible so that you can respond in a timely fashion.</p></body></html>]]></description>
</item>
<item>
<title>3 Things You Should Never Store in Your Database</title>
<link>https://www.navicat.com/company/aboutus/blog/2389-3-things-you-should-never-store-in-your.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>  <title>3 Things You Should Never Store in Your Database</title></head><body><b>Jan 19, 2024</b> by Robert Gravelle<br/><br/>  <p>In the digital age, databases play a vital role in managing and organizing information for countless applications    and systems. As stewards of valuable data, it's essential for businesses and developers to be mindful of the types    of information stored in their databases. While databases are designed to efficiently handle data, there are certain    types of information that should almost never be stored in a database. In this article, we'll explore three things you should    avoid storing in your database to maintain data integrity, security, and compliance.</p>  <h1 class="blog-sub-title">1. Duplicate and Redundant Data</h1>  <p>Storing duplicate or redundant data in your database might seem harmless at first, but it can lead to a host of    issues down the line. Duplicate data not only wastes storage space but also introduces the risk of inconsistencies    and errors. When information is duplicated across multiple records, updating one instance may be overlooked,    resulting in discrepancies that can compromise the accuracy of your data.</p>  <p>To address this, databases should be designed with normalization principles in mind. Normalization involves    organizing data to minimize redundancy and dependency, ensuring that each piece of information is stored in one    place. By doing so, you not only optimize storage but also enhance data consistency and integrity.</p>  <h1 class="blog-sub-title">2. Credit Card Information</h1>  <p>In the realm of online transactions and e-commerce, the protection of financial information is paramount. Storing    credit card information in a database poses significant risks and raises serious concerns about compliance with    industry standards such as the Payment Card Industry Data Security Standard (PCI DSS). 
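</p><p>As a rough illustration of keeping card numbers out of the database entirely (a sketch only; the table and column names are hypothetical, not a PCI DSS prescription), a payments table can hold an opaque token issued by a payment processor plus a safe display fragment:</p><pre>        <code>CREATE TABLE payments (
    payment_id    INT PRIMARY KEY,
    customer_id   INT NOT NULL,
    gateway_token VARCHAR(64) NOT NULL, -- opaque reference returned by the payment gateway
    card_last4    CHAR(4),              -- safe display fragment; never the full card number
    amount        DECIMAL(10,2) NOT NULL
);</code>    </pre><p>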
The PCI DSS outlines strict guidelines for handling and securing credit card data to prevent fraud and protect consumers.</p>  <p>Rather than storing credit card information directly, businesses should leverage secure payment gateways. Payment gateways facilitate the secure transmission of credit card information between the customer, the merchant, and the financial institution. This not only reduces the risk of data breaches but also ensures compliance with industry regulations.</p>  <h1 class="blog-sub-title">3. Sensitive Personally Identifiable Information (PII)</h1>  <p>Storing sensitive personal information, such as social security numbers, passport details, or driver's license numbers, in a database without proper safeguards is a recipe for disaster. PII is a prime target for identity theft and can be misused for fraudulent activities if it falls into the wrong hands. Even if encryption is applied to the database, the risk remains high, as decryption keys could potentially be compromised.</p>  <p>To mitigate this risk, it's advisable to implement tokenization or pseudonymization techniques for handling PII. Tokenization involves replacing sensitive data with unique tokens, rendering the original information unreadable. Pseudonymization involves replacing or encrypting sensitive identifiers using reversible algorithms, ensuring data protection while maintaining usability for authorized users.</p>  <h1 class="blog-sub-title">Exceptions and Considerations</h1>  <p>Although values that can be derived from other fields generally shouldn't be stored, exceptions may be made for performance reasons. In cases where you have millions of records, it's sometimes preferable to fetch a stored value from the database rather than cycling through records and dynamically calculating the answer every time.
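</p><p>One way to persist such a derived value (a sketch for MySQL; the table and column names are hypothetical) is a generated <code>STORED</code> column, which the server computes and saves automatically on every insert and update:</p><pre>        <code>CREATE TABLE order_items (
    item_id    INT PRIMARY KEY,
    unit_price DECIMAL(10,2) NOT NULL,
    quantity   INT NOT NULL,
    -- computed once per write, then read like any ordinary column
    line_total DECIMAL(12,2) AS (unit_price * quantity) STORED
);</code>    </pre><p>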
With that in mind, here are a couple of fields that you may want to store in your database:</p>  <h3>Retail Price</h3>   <p>The retail price of an item is often calculated as the cost plus the tax. However, this seemingly simple concept    introduces complexities when underlying prices change or new sales taxes go into effect. Storing the calculated    price in the database requires an 'as of date' along with it for context. This allows for a historical view of    prices, ensuring accurate records even when factors affecting pricing change over time.</p>  <h3>Age</h3>  <p>Storing age information may seem unnecessary when you have someone's birthday and today's date. However, considering    that age changes over time, storing the 'as of date' of the record and the 'as of age' at the time of storage    eliminates the need for sometimes tricky calculations. This approach ensures that age-related information remains    accurate, providing a snapshot of the individual's age at a specific point in time.</p>  <p>If you ever need to store a calculated field, you can quickly create an insert trigger in <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a>.</p>    <img alt="navicat-trigger (53K)" src="https://www.navicat.com/link/Blog/Image/2024/20240119/navicat-trigger.jpg" height="272" width="477" />     <h1 class="blog-sub-title">Conclusion</h1>    <p>Knowing which pieces of data to include in your database is just as important as understanding what to exclude. By    avoiding the storage of duplicate and redundant data, sensitive personal information, and certain types of    information better suited for dynamic calculations, you not only optimize storage but also enhance data    consistency, integrity, and security.</p></body></html>]]></description>
</item>
<item>
<title>Some Tips for Securing Your Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/2387-some-tips-for-securing-your-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>  <title>Some Tips for Securing Your Relational Databases</title></head><body><b>Jan 12, 2024</b> by Robert Gravelle<br/><br/>  <p>In today's digital age, data is the lifeblood of organizations. As such, securing that data has never been more crucial. To safeguard sensitive data from unauthorized access, breaches, and other security threats, it's essential to implement robust security measures. This article will offer a few such measures for securing your relational databases.</p>  <h1 class="blog-sub-title">Access Control</h1>  <p>One of the fundamental principles of database security is controlling access to stored data. Implementing strong access controls ensures that only authorized users can interact with the database. This involves assigning unique user accounts with specific permissions based on their roles and responsibilities.</p> <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>User Authentication:</strong> Enforce strong password policies and use multi-factor authentication to add an extra layer of security. This helps prevent unauthorized access, even if login credentials are compromised.</li>    <li><strong>Role-Based Access Control (RBAC):</strong> Implement RBAC to assign permissions based on users' roles within the organization. This minimizes the risk of users having unnecessary access to sensitive data.</li>  </ul>  <h1 class="blog-sub-title">Encryption</h1>  <p>Encrypting data at rest and in transit is crucial to protect it from unauthorized access and interception.</p>  <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Data at Rest Encryption:</strong> Utilize encryption algorithms to secure data stored on disk or in backups. 
This prevents unauthorized access to the physical storage media, adding an additional layer of protection.</li>    <li><strong>Secure Sockets Layer (SSL) or Transport Layer Security (TLS):</strong> Encrypt data in transit by using SSL or TLS protocols. This ensures that communication between the database server and client applications is secure and cannot be easily intercepted.</li>  </ul>  <h1 class="blog-sub-title">Backup and Recovery</h1>  <p>A solid backup and recovery strategy is essential to mitigate the impact of data loss or corruption due to security incidents.</p>  <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Regular Backups:</strong> Schedule regular backups of the database and verify their integrity. Store backups in a secure location separate from the production environment.</li>    <li><strong>Point-in-Time Recovery:</strong> Implement point-in-time recovery mechanisms to restore the database to a specific state before a security incident occurred.</li>  </ul>  <h1 class="blog-sub-title">Employee Training and Awareness</h1>  <p>Human error is a common cause of security breaches. Educate employees about security best practices and the importance of safeguarding sensitive information. Conduct regular security awareness training to keep employees informed about the latest security threats and best practices. This empowers them to recognize and report potential security incidents.</p>  <h1 class="blog-sub-title">Real-time Monitoring</h1>  <p> Utilize real-time monitoring tools to track performance metrics and identify anomalies or potential security threats.  That's where a tool such as <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor 3</a> can help. 
It's a comprehensive database monitoring and performance optimization tool designed for MySQL, MariaDB, PostgreSQL and SQL Server, as well as cloud databases like Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud and Microsoft Azure. Navicat Monitor 3 offers several features that contribute to the overall security of relational databases, including:</p>  <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Real-time Monitoring:</strong> Navicat Monitor 3 provides real-time monitoring of various performance metrics, including CPU usage, memory usage, and disk I/O. By staying informed about the database's health, administrators can quickly identify abnormal patterns that may indicate a security incident.</li>    <li><strong>Alerting and Notification:</strong> The tool offers customizable alerts and notifications for performance-related issues. By configuring alerts for specific thresholds, administrators can receive immediate notifications of potential security threats or unusual database activities.</li>    <li><strong>Historical Data Analysis:</strong> The ability to analyze historical performance data helps administrators identify patterns or trends that may indicate security issues. This feature enhances the proactive identification of potential threats before they escalate.</li>  </ul><br>  <figure>    <figcaption>Navicat Monitor 3 Dashboard</figcaption>    <img src="https://www.navicat.com/link/Blog/Image/2024/20240112/navicat%20monitor%20dashboard.jpg" alt="Navicat Monitor 3 Dashboard" />  </figure>  <h1 class="blog-sub-title">Conclusion:</h1>  <p>Securing a relational database is a multifaceted task that requires a combination of access controls, encryption, monitoring, and proactive management. 
By implementing the security measures outlined in this article and leveraging tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor 3</a>, you can fortify your organization's defenses against potential security threats and ensure the integrity and confidentiality of its valuable data.</p></body></html>]]></description>
</item>
<item>
<title>Choosing the Right Storage Engine for MySQL Tables</title>
<link>https://www.navicat.com/company/aboutus/blog/2385-choosing-the-right-storage-engine-for-mysql-tables.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Choosing the Right Storage Engine for MySQL Tables</title></head><body>    <b>Jan 5, 2024</b> by Robert Gravelle<br/><br/>    <p>MySQL, one of the most popular relational database management systems, offers a variety of storage engines, each designed to cater to specific needs and use cases. When it comes to optimizing your database performance and ensuring data integrity, selecting the right storage engine is crucial. In today's blog, we'll explore the key factors to consider when choosing a storage engine for your MySQL tables.</p>    <h1 class="blog-sub-title">Understanding Storage Engines</h1>    <p>MySQL supports multiple storage engines, each with its own set of features, strengths, and weaknesses. The storage engine is responsible for handling the storage, retrieval, and management of data in the database tables. While InnoDB and MyISAM are by far the most commonly used engines, there are several others to consider.</p>    <h3>Consider Your Usage Patterns</h3>    <p>The first step in choosing a storage engine is understanding your specific usage patterns. Different storage engines are optimized for various scenarios. Choices include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">        <li><strong>InnoDB:</strong> This is the default storage engine for MySQL and is well-suited for applications with high write-intensive workloads and transactions. InnoDB provides ACID compliance, ensuring data consistency and reliability.</li>        <li><strong>MyISAM:</strong> If your application has more read-intensive operations and doesn't require transactions, MyISAM might be a good choice. It performs well for scenarios like data warehousing and read-heavy reporting.</li>        <li><strong>MRG_MyISAM:</strong> A merge storage engine that allows you to create tables that are a collection of other MyISAM tables. 
Useful for managing large datasets spread across multiple tables.</li>        <li><strong>MEMORY:</strong> This storage engine stores all data in RAM, making it ideal for scenarios where fast access to data is critical. However, it's important to note that data stored in the MEMORY engine is volatile and doesn't persist across server restarts.</li>        <li><strong>Blackhole:</strong> Acts as a "black hole" where data is accepted but not stored. Useful for scenarios where you want to replicate data to other servers without actually storing it locally.</li>        <li><strong>CSV:</strong> Stores data in text files using the CSV format. Suitable for data exchange between databases and applications that use CSV files.</li>        <li><strong>Performance_Schema:</strong> A storage engine that provides performance-related information about server execution at runtime. Helpful for monitoring and optimizing performance.</li>        <li><strong>ARCHIVE:</strong> This engine is optimized for storing large amounts of data with minimal space requirements. It's suitable for archiving purposes where fast data retrieval is not a primary concern.</li>    </ul>    <h3>Comparing InnoDB to MyISAM</h3>    <p>Since InnoDB and MyISAM are the most popular storage engines, let's take a moment to consider each engine's strengths and weaknesses in terms of transactional capabilities, data integrity, reliability, and performance.</p>    <p>If your application involves complex transactions and requires features like rollbacks and savepoints, InnoDB is a strong candidate. It provides full ACID compliance, ensuring that transactions are handled reliably. On the other hand, if your application doesn't rely heavily on transactions and can tolerate occasional data inconsistencies, a storage engine like MyISAM may be more suitable. 
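</p><p>Whichever engine you choose, it is declared per table, so a single schema can mix them freely (a sketch; the table names are hypothetical):</p><pre>        <code>-- Transactional data: InnoDB for ACID guarantees
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    total    DECIMAL(10,2)
) ENGINE=InnoDB;

-- Read-heavy reporting snapshot: MyISAM
CREATE TABLE report_snapshot (
    snapshot_id INT PRIMARY KEY,
    payload     TEXT
) ENGINE=MyISAM;

-- A table's engine can also be changed later (this rebuilds the table)
ALTER TABLE report_snapshot ENGINE=InnoDB;</code>    </pre><p>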
MyISAM doesn't support transactions at all, but it can perform well for read-heavy workloads.</p>    <p>For applications where data integrity is paramount, InnoDB is often the preferred choice. InnoDB uses a clustered index and supports foreign key constraints, ensuring referential integrity between tables. This is crucial for applications where maintaining data consistency is a top priority. If your application can tolerate a lower level of data integrity, MyISAM might be considered. MyISAM doesn't support foreign key constraints and is more prone to table-level corruption in the event of a crash. Therefore, it's essential to weigh the trade-offs between performance and data reliability.</p>    <p>Performance is a critical factor in choosing a storage engine. InnoDB is known for its excellent performance in write-intensive scenarios due to its support for multi-version concurrency control (MVCC). It uses row-level locking, reducing contention and allowing for better concurrency. MyISAM, on the other hand, excels in read-intensive workloads. It uses table-level locking, which can impact concurrency in write-heavy scenarios but allows for faster read operations.</p>    <h1 class="blog-sub-title">Selecting a Storage Engine in Navicat</h1>    <p>Since each table can have its own storage engine in MySQL, <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> displays it in the Table Objects Explorer, along with other pertinent information, such as the latest Auto Increment Value, last Modified Date, Data Length, and number of Rows:</p>    <img alt="Navicat_table_properties (132K)" src="https://www.navicat.com/link/Blog/Image/2024/20240105/Navicat_table_properties.jpg" height="494" width="801" />    <p>To set or change a table's storage engine, open the Table Designer and click on the Options tab.
There you'll find a drop-down of supported Engines as well as a number of other relevant fields:</p>    <img alt="storage_engines_in_navicat (38K)" src="https://www.navicat.com/link/Blog/Image/2024/20240105/storage_engines_in_navicat.jpg" height="245" width="510" />    <p>Different storage engines come with their own attributes, so the other configurable options will depend on the Engine you choose.  For example, here are the fields for the InnoDB Engine:</p>    <img alt="InnoDB_engine_properties (62K)" src="https://www.navicat.com/link/Blog/Image/2024/20240105/InnoDB_engine_properties.jpg" height="558" width="506" />    <p>Meanwhile, the MEMORY Engine offers fewer configuration options:</p>    <img alt="Memory_engine_properties (45K)" src="https://www.navicat.com/link/Blog/Image/2024/20240105/Memory_engine_properties.jpg" height="377" width="507" />    <h1 class="blog-sub-title">Conclusion</h1>    <p>Selecting the appropriate storage engine for your MySQL tables is a critical decision that directly impacts your application's performance, reliability, and scalability. By carefully considering your usage patterns, transactional requirements, data integrity needs, performance considerations, and exploring specialized storage engines, you can make an informed decision that aligns with your organization's goals.</p></body></html>]]></description>
</item>
<item>
<title>Configuring Editor Settings in Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/2383-configuring-editor-settings-in-navicat.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Configuring Editor Settings in Navicat</title></head><body><b>Dec 29, 2023</b> by Robert Gravelle<br/><br/><p>Navicat, a powerful database management tool, offers a robust environment for developers and database administrators alike. One of its key features is the SQL Editor, where users can write and execute SQL queries. The Editor settings in Navicat allow users to tailor their working environment to meet specific personal and organizational preferences. In this blog article, we'll explore the various configuration options available in Navicat's SQL Editor.</p><h1 class="blog-sub-title">Accessing the Editor Settings</h1><p>All of the configuration options for Navicat SQL Editors are conveniently located in one place. You'll find them on the Editor screen of the Options dialog. To bring up the Editor settings, click Tools -> Options... on the Main Menu to display the Options dialog and then select "Editor" in the left-hand pane:</p><img alt="editor_screen (64K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/editor_screen.jpg" height="627" width="882" /><h1 class="blog-sub-title">Code Formatting</h1><p>One of the first things you might want to customize in Navicat's SQL Editor is code formatting. Properly formatted code enhances readability and makes it easier to collaborate with team members. Navicat allows you to define indentation, font type and many other options.</p><p>To format the contents of the SQL Editor at any time, click the "Beautify SQL" button located on the Editor Toolbar.</p><p>Here's a before and after example:</p><img alt="code_formatting (32K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/code_formatting.jpg" height="275" width="377" /><h1 class="blog-sub-title">Choosing a Color Scheme</h1><p>Creating an appealing and clear color scheme can significantly improve your coding experience. 
Navicat lets you customize the color of common text, keywords, strings, numbers, comments, and even the background, so you can define a color scheme that aligns with your preferences.</p><p>To set a color, click on the square sample block to the right of the target property to open the system color dialog.  There, you can choose a predefined color, select a saved custom color, or create a brand new custom color!</p><img alt="color_dialog (67K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/color_dialog.jpg" height="502" width="548" /><p>Here's a query using Courier font at 12pt with custom colors:</p><img alt="custom_font_and_colors (62K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/custom_font_and_colors.jpg" height="280" width="704" /><h1 class="blog-sub-title">Highlighting Matching Brackets</h1><p>Ensuring that matching brackets are easily identifiable is crucial for avoiding syntax errors. Navicat provides an option to highlight matching brackets, making it easier to navigate and troubleshoot your code.</p><p>Placing the cursor at the opening brace highlights both the opening and closing braces:</p><img alt="matching_brackets (14K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/matching_brackets.jpg" height="62" width="368" /><h1 class="blog-sub-title">Disabling Syntax Highlighting</h1><p>There are times when you may want to switch off syntax highlighting. This can be accomplished by deselecting the "Use syntax highlighting" checkbox. 
Syntax highlighting can also be turned off for large files by specifying a maximum MB value in the "Disable if file size is larger than (MB)" textbox.</p><img alt="query_without_syntax_highlighting (55K)" src="https://www.navicat.com/link/Blog/Image/2023/20231229/query_without_syntax_highlighting.jpg" height="265" width="556" /><h1 class="blog-sub-title">Miscellaneous Options</h1><p>Here are a few other settings that you're sure to find useful:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li><strong>Show line number:</strong> Display line numbers at the left side of the editor for easy reference.</li><li><strong>Use code folding:</strong> Code folding lets a block of code collapse so that only its first line is displayed in the editor.</li><li><strong>Tab Width:</strong> Enter the number of characters that a tab occupies, e.g. 5.</li></ul><h1 class="blog-sub-title">Final Thoughts on Configuring Editor Settings in Navicat</h1><p>Customizing Navicat's SQL Editor is about more than just visual preferences; it's a practical way to enhance your coding efficiency and reduce the likelihood of errors. By carefully choosing colors, customizing styles, and enabling features like bracket highlighting, you can create a coding environment that aligns perfectly with your workflow. Take the time to explore these options and discover how they can elevate your SQL coding experience in <a class="default-links" href="https://navicat.com/products/navicat-premium/" target="_blank">Navicat</a>!</p></body></html>]]></description>
</item>
<item>
<title>Introduction to Aggregate Queries</title>
<link>https://www.navicat.com/company/aboutus/blog/2381-introduction-to-aggregate-queries.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Introduction to Aggregate Queries</title></head><body><b>Dec 21, 2023</b> by Robert Gravelle<br/><br/>    <p>While basic SQL queries can retrieve, insert, update, and delete records, aggregate queries take database interactions to a new level by providing the sums, averages, or min/max value from a large result set. In this article, we'll explore the fundamentals of aggregate SQL queries, examining how they can be employed to analyze and summarize data effectively.</p>        <h1 class="blog-sub-title">Understanding Aggregate Functions</h1>    <p>Aggregate functions in SQL operate on sets of rows and return a single value as output. These functions are invaluable when it comes to performing calculations on data within a database. Some of the commonly used aggregate functions include:</p>    <h3>1. COUNT()</h3>    <p>The <code>COUNT()</code> function tallies the number of rows that meet a specified condition. It can be used to count all rows or those satisfying certain criteria.</p>    <pre>        <code>SELECT COUNT(*) AS total_records FROM employees;</code>    </pre>    <h3>2. SUM()</h3>    <p>The <code>SUM()</code> function calculates the total sum of a numeric column.</p>    <pre>        <code>SELECT SUM(salary) AS total_salary FROM employees;</code>    </pre>    <h3>3. AVG()</h3>    <p>The <code>AVG()</code> function determines the average value of a numeric column.</p>    <pre>        <code>SELECT AVG(age) AS average_age FROM students;</code>    </pre>    <h3>4. 
MAX() and MIN()</h3>    <p>The <code>MAX()</code> and <code>MIN()</code> functions identify the maximum and minimum values in a column, respectively.</p>    <pre>        <code>SELECT MAX(price) AS max_price, MIN(price) AS min_price FROM products;</code>    </pre>        <h1 class="blog-sub-title">Grouping Data with GROUP BY</h1>    <p>One of the powerful aspects of aggregate queries in SQL is the ability to group data based on certain criteria using the <code>GROUP BY</code> clause. This facilitates the analysis of subsets of data, allowing for more granular insights.</p>    <h3>Grouping with COUNT()</h3>    <pre>        <code>SELECT department, COUNT(*) AS employee_count
FROM employees
GROUP BY department;</code>    </pre>    <h3>Grouping with AVG()</h3>    <pre>        <code>SELECT department, AVG(salary) AS average_salary
FROM employees
GROUP BY department;</code>    </pre>        <h1 class="blog-sub-title">Filtering Groups with HAVING</h1>    <p>The <code>HAVING</code> clause is used in conjunction with <code>GROUP BY</code> to filter the results of aggregate queries based on conditions applied to grouped data.</p>    <pre>        <code>SELECT department, AVG(salary) AS average_salary
FROM employees
GROUP BY department
HAVING AVG(salary) > 50000;</code>    </pre>        <h1 class="blog-sub-title">Combining Aggregate Functions</h1>    <p>SQL allows for the combination of multiple aggregate functions in a single query, offering comprehensive insights into the data.</p>    <pre>        <code>SELECT department, COUNT(*) AS employee_count, AVG(salary) AS average_salary
FROM employees
GROUP BY department;</code>    </pre>        <h1 class="blog-sub-title">Using Aggregate Functions in Navicat</h1>        <p>If you're ever unsure of a function's exact name or input parameters, you can start typing it in the SQL Editor and <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> will present a list of matching options that you can 
select from to autocomplete a term. Aggregate functions are identified by the Greek Sigma symbol (&Sigma;):</p>        <img alt="AVG_function_in_autocomplete_list (57K)" src="https://www.navicat.com/link/Blog/Image/2023/20231221/AVG_function_in_autocomplete_list.jpg" height="313" width="631" />            <h1 class="blog-sub-title">Conclusion</h1>    <p>Aggregate SQL queries are indispensable tools for data analysis and reporting in relational databases.        Whether you're summarizing information, calculating averages, or grouping data based on certain criteria, understanding how to leverage aggregate functions and clauses like <code>GROUP BY</code> and <code>HAVING</code> is essential for proficient database querying.</p></body></html>]]></description>
</item>
<item>
<title>Measuring Query Execution Time in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/2379-measuring-query-execution-time-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Measuring Query Execution Time in Relational Databases</title></head><body><b>Dec 15, 2023</b> by Robert Gravelle<br/><br/>    <p>In the realm of database optimization, understanding and monitoring query execution time is crucial. Whether you're a database administrator, developer, or involved in performance tuning, knowing how to measure the time a query takes to execute can provide valuable insights into the efficiency of your database operations. In this article, we'll explore various techniques for measuring query execution time in popular relational databases such as MySQL, PostgreSQL, Microsoft SQL Server, and Oracle Database.</p>    <h1 class="blog-sub-title">MySQL</h1>    <h3>Using Query Profiling:</h3>    <pre>        <code>SET profiling = 1;
-- Your SQL Query Goes Here
SHOW PROFILES;</code>    </pre>    <p>This sequence of commands enables profiling, executes your query, and then shows the profiling results. Look for the "Duration" column to find the execution time in seconds.</p>    <img alt="profiling_results (131K)" src="https://www.navicat.com/link/Blog/Image/2023/20231215/profiling_results.jpg" height="725" width="564" />    <p>To calculate the total duration, you can use the following SQL query:</p>    <pre>        <code>SELECT SUM(Duration) AS TotalDuration
FROM information_schema.profiling
WHERE Query_ID > 1;</code>    </pre>        <img alt="summing_duration (24K)" src="https://www.navicat.com/link/Blog/Image/2023/20231215/summing_duration.jpg" height="207" width="384" />    <h1 class="blog-sub-title">PostgreSQL</h1>    <h3>Enabling Timing:</h3>    <p>PostgreSQL's psql client has a built-in feature to measure query execution time. 
You can enable timing by executing the following command:</p>    <pre>        <code>\timing
-- Your SQL Query Goes Here</code>    </pre>    <p>This will display the time taken to execute your query in milliseconds.</p>    <h3>Using pg_stat_statements:</h3>    <p>PostgreSQL comes with an extension called pg_stat_statements, which provides a detailed view of executed SQL statements. To use it, ensure the extension is enabled in your PostgreSQL configuration and execute the query:</p>        <pre>        <code>SELECT total_time, calls, query
FROM pg_stat_statements
WHERE query = 'Your SQL Query Goes Here';</code>    </pre>    <p>This will give you information about the total time spent executing the specified query.</p>    <h1 class="blog-sub-title">Microsoft SQL Server</h1>    <h3>Using SET STATISTICS TIME:</h3>    <p>SQL Server allows you to enable time statistics for a session using the SET STATISTICS TIME ON command. After executing your query, you'll receive a message in the "Messages" tab showing the total time:</p>    <pre>        <code>SET STATISTICS TIME ON
-- Your SQL Query Goes Here
SET STATISTICS TIME OFF</code>    </pre>    <h3>Querying sys.dm_exec_query_stats:</h3>    <p>For a more programmatic approach, you can query the sys.dm_exec_query_stats dynamic management view to get information about query execution times:</p>    <pre>        <code>SELECT total_elapsed_time, execution_count, text
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text LIKE 'Your SQL Query Goes Here%';</code>    </pre>    <p>This query retrieves information about the total elapsed time and the number of times the query has been executed.</p>    <h1 class="blog-sub-title">Oracle Database</h1>    <h3>Using SQL*Plus AUTOTRACE:</h3>    <p>Oracle Database provides the SQL*Plus AUTOTRACE feature, which can be used to display execution plans and statistics for SQL statements. 
To enable it, use the following commands:</p>    <pre>        <code>SET AUTOTRACE ON
-- Your SQL Query Goes Here
SET AUTOTRACE OFF</code>    </pre>    <p>The output will include information about the elapsed time, CPU time, and other statistics.</p>    <h3>Querying V$SQL:</h3>    <p>For more detailed information, you can query the V$SQL dynamic performance view:</p>        <pre>        <code>SELECT elapsed_time, executions, sql_text
FROM V$SQL
WHERE sql_text LIKE 'Your SQL Query Goes Here%';</code>    </pre>    <p>This query retrieves information about the elapsed time and the number of executions for the specified query.</p>    <h1 class="blog-sub-title">Viewing Execution Time in Navicat</h1>    <p>If you only need to view the total execution time of a query, you can find it at the bottom of the main <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> application window, along with other pertinent query details:</p>    <img alt="query_information_in_navicat (46K)" src="https://www.navicat.com/link/Blog/Image/2023/20231215/query_information_in_navicat.jpg" height="251" width="716" />        <h1 class="blog-sub-title">Conclusion</h1>    <p>Understanding and optimizing query execution time is fundamental to maintaining a high-performing database. By leveraging the tools and techniques discussed in this article, you can gain valuable insights into your database's performance and take proactive steps to enhance efficiency. Whether you're working with MySQL, PostgreSQL, Microsoft SQL Server, or Oracle Database, measuring and analyzing query execution time is a worthwhile endeavor for any database professional.</p></body></html>]]></description>
</item>
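The server-side techniques above have a client-side counterpart: timing the query from the application itself. The sketch below illustrates the idea in Python, with the standard-library sqlite3 driver standing in for the databases discussed (only the connection call would differ for MySQL, PostgreSQL, etc.). Note that client-side timing also includes driver and network overhead, so it will not exactly match the server's own statistics.

```python
import sqlite3
import time

# Client-side timing sketch: wrap any query in a perf_counter() pair.
# SQLite (Python's built-in driver) stands in here for whichever
# database you actually use; this is illustrative, not a benchmark.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("user%d" % i,) for i in range(1000)])

start = time.perf_counter()
rows = conn.execute("SELECT COUNT(*) FROM users WHERE id % 2 = 0").fetchall()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Query matched {rows[0][0]} rows in {elapsed_ms:.3f} ms")
```

Measured this way, the figure answers "how long did my application wait?", which is often the number users actually experience.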
<item>
<title>Choosing Between Redis and a Traditional Relational Database</title>
<link>https://www.navicat.com/company/aboutus/blog/2377-choosing-between-redis-and-a-traditional-relational-database.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>Choosing Between Redis and a Traditional Relational Database</title></head><body><b>Dec 8, 2023</b> by Robert Gravelle<br/><br/><p>When it comes to selecting the right database for your application, the decision often boils down to the specific requirements of your project. Redis, a high-performance in-memory data store, and traditional relational databases such as MySQL, each offer their own strengths and weaknesses. In this guide, we will explore various factors to consider when deciding between Redis and a traditional relational database. For the sake of simplicity, we'll use MySQL as our traditional relational database. Should you decide to go that route, you may want to look at other relational database products such as SQL Server and Oracle.</p><h1 class="blog-sub-title">Data Model and Structure</h1><p>One of the primary differences between Redis and MySQL lies in their data models. Redis is a key-value store, where data is stored as pairs of keys and values. This simplicity makes it efficient for certain use cases like caching, session storage, and real-time analytics. On the other hand, as a relational database, MySQL allows you to define structured tables with relationships between them.</p><p><strong>Hash Data in Redis</strong></p><img alt="hash (78K)" src="https://www.navicat.com/link/Blog/Image/2023/20231208/hash.jpg" height="494" width="724" /><p><strong>A MySQL Table</strong></p><img alt="ups_table (195K)" src="https://www.navicat.com/link/Blog/Image/2023/20231208/ups_table.jpg" height="685" width="694" /><p>Consider your data structure and whether a key-value model or a relational model better suits your application's needs.</p><h1 class="blog-sub-title">Performance</h1><p>Redis is renowned for its exceptional performance, especially for read-heavy workloads and scenarios requiring low-latency responses. Being an in-memory database, Redis stores all data in RAM, providing fast access times. 
On the other hand, MySQL, while still performing well, might encounter bottlenecks as the dataset grows, especially in scenarios with complex queries and frequent write operations.</p><p><strong>Example: Redis Read Operation</strong></p><pre><code>// Retrieving data from Redis
redisClient.get("user:123", (err, result) => {
    const userData = JSON.parse(result);
    console.log(userData);
});</code></pre><p><strong>Example: MySQL Read Operation</strong></p><pre><code>-- Retrieving data from the users table in MySQL
SELECT * FROM users WHERE id = 123;</code></pre><p>Consider the nature of your application's workload and whether the emphasis is on read or write operations.</p><h1 class="blog-sub-title">Persistence</h1><p>One key consideration is data persistence. Redis, being an in-memory store, may not be the best choice for scenarios where durability and persistence are critical. While Redis does offer persistence options, such as snapshots and append-only files, MySQL inherently provides more robust durability features.</p><p><strong>Example: Redis Snapshot Persistence</strong></p><pre><code># Configuring Redis to take a snapshot every 5 minutes (if at least one key changed)
CONFIG SET save "300 1"</code></pre><p>Ensure your choice aligns with your application's requirements for data persistence.</p><h1 class="blog-sub-title">Scalability</h1><p>Scalability is another crucial factor. Redis excels in horizontal scalability, making it suitable for distributed setups and scenarios where you need to scale out across multiple nodes. 
MySQL, while also scalable, might require more effort and careful planning, especially in large-scale distributed environments.</p><p><strong>Example: Redis Horizontal Scaling</strong></p><pre><code># Creating a three-node Redis cluster (Redis 5+; replaces the deprecated redis-trib.rb)
redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 --cluster-replicas 0</code></pre><p><strong>Example: MySQL Sharding</strong></p><pre><code>-- Sharding the users table across multiple databases
-- (Assuming a sharding key 'user_id')
CREATE TABLE users_shard_1 SELECT * FROM users WHERE user_id % 3 = 1;
CREATE TABLE users_shard_2 SELECT * FROM users WHERE user_id % 3 = 2;
CREATE TABLE users_shard_3 SELECT * FROM users WHERE user_id % 3 = 0;</code></pre><p>Consider the scalability requirements of your application and whether your chosen database can scale accordingly.</p><h1 class="blog-sub-title">Use Case Considerations</h1><p>Understanding the specific use cases for Redis and MySQL is crucial for making an informed decision. With this in mind, here are the top three use cases of each database:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li><strong>Redis Use Cases:</strong>        <ul style="list-style-type: circle; margin-left: 24px; line-height: 24px;">            <li>Caching: Redis excels in caching due to its fast read access.</li>            <li>Real-time Analytics: Its in-memory nature is beneficial for quick data analysis.</li>            <li>Session Storage: Ideal for storing and managing session data.</li>        </ul>    </li>    <li><strong>MySQL Use Cases:</strong>        <ul style="list-style-type: circle; margin-left: 24px; line-height: 24px;">            <li>Transactional Data: MySQL is well-suited for applications requiring ACID compliance.</li>            <li>Complex Queries: If your application involves complex queries and reporting, MySQL might be a better fit.</li>            <li>Data Integrity: For scenarios where relational data integrity is a priority.</li>        
</ul>    </li></ul><p>Consider the specific requirements of your project and how well each database aligns with those needs.</p><h1 class="blog-sub-title">Working with Redis</h1><p>One reservation you may have about going with Redis is that its syntax is so dissimilar to that of traditional databases. However, that need not be an issue. <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>, a powerful GUI tool designed to enhance the management and interaction with Redis databases, provides an intuitive interface for performing various tasks such as browsing, querying, and modifying data. </p> <figure>  <figcaption>Main Screen of Navicat for Redis on macOS</figcaption>  <img alt="Navicat for Redis Main Screen on macOS" src="https://www.navicat.com/link/Blog/Image/2023/20231208/Screenshot_Navicat_16.2_Redis_Mac_01_MainScreen.jpg" height="634" width="1064" /></figure> <h1 class="blog-sub-title">Conclusion</h1><p>Choosing between Redis and MySQL involves careful consideration of factors such as data model, performance, persistence, scalability, and use case requirements. Assessing these aspects in the context of your application's specific needs will guide you toward the most suitable database for your project.</p></body></html>]]></description>
</item>
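The modulo sharding shown in the MySQL example can also live in application code: a small router decides which shard a given user_id belongs to before any query is issued. This Python sketch (the function and table names are illustrative, not a real driver API) mirrors the WHERE user_id % 3 = ... clauses in the article.

```python
# Sketch of the modulo sharding scheme from the MySQL example:
# a router maps each user_id to one of three shards.
SHARDS = ["users_shard_1", "users_shard_2", "users_shard_3"]

def shard_for(user_id: int) -> str:
    # user_id % 3 == 1 -> shard 1, == 2 -> shard 2, == 0 -> shard 3,
    # matching the CREATE TABLE ... WHERE clauses above.
    remainder = user_id % 3
    return SHARDS[remainder - 1] if remainder else SHARDS[2]

print(shard_for(123))  # 123 % 3 == 0, so users_shard_3
```

The application would then open a connection to the shard this function names, which is exactly the "more effort and careful planning" the article alludes to.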
<item>
<title>Formatting Dates and Times in Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/2373-formatting-dates-and-times-in-navicat.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Formatting Dates and Times in Navicat</title></head><body><b>Dec 1, 2023</b> by Robert Gravelle<br/><br/><p>One of the most common questions Navicat Support receives from users is how to format dates and times in both Grid and Form View. It's actually quite simple! In today's blog, we'll go over the steps to change date and time formats globally in <a class="default-links" href="https://navicat.com/products/navicat-premium/" target="_blank">Navicat Premium</a>.</p><h1 class="blog-sub-title">Where Navicat Defines Display Formats</h1><p>You'll find display formats for dates and times on the Records screen of the Options dialog. It is accessible via the Tools -> Options... command from the main menu:</p><img alt="options_command (43K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/options_command.jpg" height="265" width="429" /><p>We can see the Date, Time, and DateTime formats in the Display Format section of the Records screen (highlighted with a red border):</p><img alt="date_and_time_display_formats_on_the_records_screen (70K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/date_and_time_display_formats_on_the_records_screen.jpg" height="627" width="882" /><h1 class="blog-sub-title">Setting the Format</h1><p>Let's go ahead and update the DateTime format using the Sakila Sample Database as an example. Many of its tables contain a DateTime field called last_update that is used for auditing purposes. We can see it in this screen capture of the actor table (again highlighted with a red border):</p><img alt="last_update_column_in_sakila_actor_table (120K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/last_update_column_in_sakila_actor_table.jpg" height="446" width="698" /><p>By default, Navicat displays dates and times in whatever format they are defined in the database. 
In the case of MySQL, it displays DateTime values in 'YYYY-MM-DD hh:mm:ss' format, for example '2019-10-12 14:35:18' (notice the use of the 24-hour clock).</p><h3>Standard SQL and ODBC Date and Time Literals</h3><p>If you're unsure of the meaning of the letters in the 'YYYY-MM-DD hh:mm:ss' string, those are part of the Standard SQL and ODBC Date and Time Literals.  These are standardized ways of representing date and time values in SQL queries. They provide a consistent and platform-independent method for specifying date and time values in SQL statements. Here's a list of each letter pattern and their meaning. You'll want to get acquainted with them because Navicat also uses them to set date and time formats:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>c - Display the date using the format given by the system global variable, followed by the time using the format given by the system global variable. The time is not displayed if the date-time value indicates midnight precisely.</li><li>d - Display the day as a number without a leading zero (1-31).</li><li>dd - Display the day as a number with a leading zero (01-31).</li><li>ddd - Display the day as an abbreviation (Sun-Sat).</li><li>dddd - Display the day as a full name (Sunday-Saturday).</li><li>ddddd - Display the date using the format given by the system global variable.</li><li>dddddd - Display the date using the format given by the system global variable.</li><li>m - Display the month as a number without a leading zero (1-12). If the m specifier immediately follows an h or hh specifier, the minute rather than the month is displayed.</li><li>mm - Display the month as a number with a leading zero (01-12). 
If the mm specifier immediately follows an h or hh specifier, the minute rather than the month is displayed.</li><li>mmm - Display the month as an abbreviation (Jan-Dec) using the strings given by the system global variable.</li><li>mmmm - Display the month as a full name (January-December) using the strings given by the system global variable.</li><li>yy - Display the year as a two-digit number (00-99).</li><li>yyyy - Display the year as a four-digit number (0000-9999).</li><li>h - Display the hour without a leading zero (0-23).</li><li>hh - Display the hour with a leading zero (00-23).</li><li>n - Display the minute without a leading zero (0-59).</li><li>nn - Display the minute with a leading zero (00-59).</li><li>s - Display the second without a leading zero (0-59).</li><li>ss - Display the second with a leading zero (00-59).</li><li>t - Display the time using the format given by the system global variable.</li><li>tt - Display the time using the format given by the system global variable.</li><li>am/pm - Use the 12-hour clock for the preceding h or hh specifier, and display 'am' for any hour before noon, and 'pm' for any hour after noon. The am/pm specifier can use lower, upper, or mixed case, and the result is displayed accordingly.</li><li>a/p - Use the 12-hour clock for the preceding h or hh specifier, and display 'a' for any hour before noon, and 'p' for any hour after noon. The a/p specifier can use lower, upper, or mixed case, and the result is displayed accordingly.</li><li>ampm - Use the 12-hour clock for the preceding h or hh specifier, and display the contents of the system global variable for any hour before noon, and the contents of the system global variable for any hour after noon.</li><li>/ - Date separator. In some locales, other characters may be used to represent the date separator.</li><li>: - Time separator. 
In some locales, other characters may be used to represent the time separator.</li><li>'xx'/"xx" - Characters enclosed in single or double quotes are displayed as-is, with no formatting changes.</li></ul><p>Now, let's change the global Navicat DateTime format to utilize the numeric day without a leading zero, the three-letter month abbreviation, and the 12-hour clock with the AM/PM indicator.</p><p>Using the above instructions as our guide, that would give us a format string of "mmm d, yyyy hh:mm:ss AM/PM". We can see the results in real-time in the Output field as we type:</p><img alt="output_field (16K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/output_field.jpg" height="176" width="422" /><p>After closing the Options dialog via the OK button, all DateTime fields should now be using our custom DateTime format. Here is the last_update column of the actor table mentioned previously: </p><img alt="last_update_column_in_sakila_actor_table_with_new_format (73K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/last_update_column_in_sakila_actor_table_with_new_format.jpg" height="320" width="447" /><p>Remember that the new format will apply globally across all databases. To confirm this, let's take a look at the orders table from the classicmodels database. It contains three DateTime columns, but only sets the date portion. These columns also display their values according to our new format:</p><img alt="classicmodels_orders_table_with_new_format (146K)" src="https://www.navicat.com/link/Blog/Image/2023/20231201/classicmodels_orders_table_with_new_format.jpg" height="348" width="864" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we learned how to easily change date and time formats globally in the Options dialog. 
While we used <a class="default-links" href="https://navicat.com/products/navicat-premium/" target="_blank">Navicat Premium</a> here today, note that other Navicat products, such as Navicat for MySQL or Navicat for SQL Server, would work in exactly the same way.</p></body></html>]]></description>
</item>
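To make the specifier semantics concrete, here is a hypothetical Python sketch that maps a handful of the tokens above onto strftime directives. It is purely illustrative of what each token means (note nn for minutes, since n rather than m is the unambiguous minute specifier) and is not how Navicat itself implements formatting.

```python
from datetime import datetime

# Illustrative mapping of a few Navicat-style format specifiers to
# Python strftime equivalents, just to demonstrate the token meanings.
TOKENS = {
    "yyyy": "%Y",  # four-digit year
    "mm": "%m",    # two-digit month
    "dd": "%d",    # two-digit day
    "hh": "%H",    # two-digit hour, 24-hour clock
    "nn": "%M",    # two-digit minute ('n', not 'm', means minute)
    "ss": "%S",    # two-digit second
}

def render(fmt: str, dt: datetime) -> str:
    # Replace longer tokens first so e.g. 'yyyy' is not consumed as 'yy'.
    for token in sorted(TOKENS, key=len, reverse=True):
        fmt = fmt.replace(token, TOKENS[token])
    return dt.strftime(fmt)

dt = datetime(2019, 10, 12, 14, 35, 18)
print(render("yyyy-mm-dd hh:nn:ss", dt))  # 2019-10-12 14:35:18
```

A real implementation would also honor context rules such as "mm after hh means minutes", which this toy translator deliberately omits.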
<item>
<title>Understanding Navicat Connection Profiles</title>
<link>https://www.navicat.com/company/aboutus/blog/2370-understanding-navicat-connection-profiles.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Understanding Navicat Connection Profiles</title></head><body><b>Nov 24, 2023</b> by Robert Gravelle<br/><br/><p><a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a> came loaded with numerous improvements and features to address both database developers' and administrators' needs. With over 100 enhancements and a brand new interface, there are more ways than ever to build, manage, and maintain your databases. One of the many improvements aimed at maximizing productivity is the ability to configure multiple connection profiles. This feature is ideal for out-of-office users who may need to switch between settings based on their location or for streamlining database access. Today's blog will outline the process of creating a new connection profile and how to switch between profiles.</p><h1 class="blog-sub-title">Connection Profiles at a Glance</h1><p>Before demonstrating how to create a connection profile, let's briefly go over what they are. Simply put, Navicat connection profiles serve as predefined configurations that store all the necessary information needed to connect to a specific database. This includes details like the host address, port number, username, password, and the specific database you want to access. By creating and saving these profiles, you can quickly establish connections without manually entering these details every time.</p><h1 class="blog-sub-title">Creating a New Connection Profile</h1><p>When we create a connection to a database instance, it becomes the Main (active) Profile for the Open Connection command. 
However, if we open the New Connection or Edit Connection dialog we can see a button to toggle the Connection Profile pane in the lower left corner of the dialog:</p><img alt="Toggle Connection Profiles Pane button" src="https://www.navicat.com/link/Blog/Image/2023/20231124/Toggle%20Connection%20Pane%20button.jpg" height="356" width="592" /><p>There, we will see the Main Profile listed.</p><p>We can create a connection profile by clicking the New Connection Profile link at the bottom of the Connection Profile pane:</p><img alt="New Connection Profile link (7K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/New%20Connection%20Profile%20link.jpg" height="95" width="247" /><p>That will copy over the Main Profile details to the new profile and display a text field where we can assign it a name.  We'll call it "Most Used Databases":</p><img alt="New Connection Profile name (11K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/New%20Connection%20Profile%20name.jpg" height="145" width="244" /><p>On the Databases tab, we'll deselect all but a few core databases:</p><img alt="Databases tab (97K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/Databases%20tab.jpg" height="707" width="802" /><p>Upon clicking the OK button, we are prompted by a dialog asking if we want to switch the Active Profile to the new one:</p><img alt="Switch Profile prompt (14K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/Switch%20Profile%20prompt.jpg" height="127" width="419" /><p>Let's go ahead and click "Switch". That will close the dialog and display our new Connection Profile beside the MySQL connection in the main Connections pane:</p><img alt="New Connection Profile in main connections pane (26K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/New%20Connection%20Profile%20in%20main%20connections%20pane.jpg" height="321" width="235" /><p>Now the Open Connection command will open our new Connection profile. 
We can easily confirm this because only the databases that we selected are shown in the Connections pane:</p><img alt="Open Most Used Databases connection (28K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/Open%20Most%20Used%20Databases%20connection.jpg" height="340" width="218" /><h1 class="blog-sub-title">Switching Connection Profiles</h1><p>Once a Connection Profile is open, we can easily switch to another via the Switch Connection Profile command. From there, we can select from any connection profile for that database instance or even create a new one!</p><img alt="switching connection profile (45K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/switching%20connection%20profile.jpg" height="203" width="646" /><p>A dialog prompt will remind us that the current connection will be closed before switching to the new one:</p><img alt="Close and switch connection profile prompt (17K)" src="https://www.navicat.com/link/Blog/Image/2023/20231124/Close%20and%20switch%20connection%20profile%20prompt.jpg" height="127" width="435" /><p>That will make the selected profile the Main one, so that the Open Connection command will open it from now on.</p><h1 class="blog-sub-title">Conclusion</h1><p><a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a>'s connection profiles are a powerful tool for simplifying database management tasks. By creating and organizing profiles, you can swiftly connect to databases without the need for manual entry of connection details. As you become more adept at using Navicat, you'll discover additional features and techniques that further enhance your database management experience.</p></body></html>]]></description>
</item>
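Conceptually, a connection profile is just a named bundle of connection settings plus a database selection, with one profile active at a time. The toy Python model below (all field names are invented for illustration; it does not reflect Navicat's actual storage format) captures the create-and-switch flow described above.

```python
from dataclasses import dataclass, field

# Toy model of what a connection profile stores, per the article:
# one connection can carry several named profiles, one of which
# is active. Purely illustrative; not Navicat's data model.
@dataclass
class ConnectionProfile:
    name: str
    host: str = "localhost"
    port: int = 3306
    databases: tuple = ()  # databases shown when this profile is open

@dataclass
class Connection:
    profiles: dict = field(default_factory=dict)
    active: str = "Main Profile"

    def add_profile(self, profile: ConnectionProfile):
        self.profiles[profile.name] = profile

    def switch(self, name: str) -> ConnectionProfile:
        # Mirrors the Switch Connection Profile command: the chosen
        # profile becomes the one Open Connection uses from now on.
        self.active = name
        return self.profiles[name]

conn = Connection()
conn.add_profile(ConnectionProfile("Main Profile",
                                   databases=("sakila", "classicmodels", "world")))
conn.add_profile(ConnectionProfile("Most Used Databases",
                                   databases=("sakila",)))
print(conn.switch("Most Used Databases").databases)  # ('sakila',)
```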
<item>
<title>Some Essential Redis Commands</title>
<link>https://www.navicat.com/company/aboutus/blog/2368-some-essential-redis-commands.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>Some Essential Redis Commands </title></head><body><b>Nov 17, 2023</b> by Robert Gravelle<br/><br/><p>Redis, the blazing-fast, in-memory data structure store, is revered for its prowess in handling key-value pairs. However, its utility extends far beyond basic key operations. In this article, we'll explore some of the most indispensable Redis commands (other than those which involve keys, since we've covered those previously), unlocking the true potential of this powerful tool. We'll also see how to communicate directly with Redis from Navicat!</p> <h1 class="blog-sub-title">1. LPUSH and RPUSH</h1><p>Redis's versatility shines through its ability to handle complex data structures. Two of the most powerful commands in this regard are <code>LPUSH</code> and <code>RPUSH</code>, which respectively add elements to the left and right ends of a list.</p><pre><code>> LPUSH my_list "element1"
(integer) 1
> RPUSH my_list "element2"
(integer) 2
> LRANGE my_list 0 -1
1) "element1"
2) "element2"</code></pre><p>These commands are instrumental in scenarios where you need to manage ordered data sets.</p> <h1 class="blog-sub-title">2. LPOP and RPOP</h1><p>To complement the list addition commands, Redis provides <code>LPOP</code> and <code>RPOP</code>, which respectively remove and return the first and last elements of a list.</p><pre><code>> LPOP my_list
"element1"
> RPOP my_list
"element2"</code></pre><p>These commands are particularly useful when implementing queues or stacks.</p> <h1 class="blog-sub-title">3. SADD and SMEMBERS</h1><p>Redis sets are collections of unique elements. 
<code>SADD</code> adds one or more members to a set, while <code>SMEMBERS</code> retrieves all the members of a set.</p><pre><code>> SADD my_set "member1"
(integer) 1
> SADD my_set "member2"
(integer) 1
> SMEMBERS my_set
1) "member1"
2) "member2"</code></pre><p>Sets are powerful for scenarios requiring membership testing or storing unique data.</p> <h1 class="blog-sub-title">4. ZADD and ZRANGE</h1><p>Sorted sets in Redis provide an ordered collection of unique elements. <code>ZADD</code> adds elements with a specified score, while <code>ZRANGE</code> retrieves elements within a specified range.</p><pre><code>> ZADD my_sorted_set 1 "element1"
(integer) 1
> ZADD my_sorted_set 2 "element2"
(integer) 1
> ZRANGE my_sorted_set 0 -1 WITHSCORES
1) "element1"
2) "1"
3) "element2"
4) "2"</code></pre><p>Sorted sets are excellent for scenarios requiring ordered data retrieval.</p> <h1 class="blog-sub-title">5. HSET and HGET</h1><p>Redis hashes are maps between string field names and string values. <code>HSET</code> sets the value of a field in a hash, while <code>HGET</code> retrieves the value of a field.</p><pre><code>> HSET my_hash field1 "value1"
(integer) 1
> HSET my_hash field2 "value2"
(integer) 1
> HGET my_hash field1
"value1"</code></pre><p>Hashes are ideal for scenarios involving structured data.</p> <h1 class="blog-sub-title">6. PUBLISH and SUBSCRIBE</h1><p>Redis excels not only in data storage but also in real-time messaging. The <code>PUBLISH</code> command allows a client to send a message to a channel, while the <code>SUBSCRIBE</code> command enables a client to listen to messages on a channel.</p><pre><code># Terminal 1
&gt; SUBSCRIBE my_channel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "my_channel"
3) (integer) 1

# Terminal 2
&gt; PUBLISH my_channel "Hello, Redis!"
(integer) 1</code></pre><p>This feature is invaluable for building real-time applications and event-driven architectures.</p> <h1 class="blog-sub-title">7. 
SCAN</h1><p>While not a command for direct data manipulation, the <code>SCAN</code> command is essential for iterating over keys in a Redis database without blocking the server. It provides a cursor-based approach to prevent overloading the system.</p><pre><code>&gt; SCAN 0
1) "0"
2) 1) "my_list"
   2) "my_set"
   3) "my_sorted_set"
   4) "my_hash"</code></pre><p>This command is crucial for operations involving large datasets. (Note that Pub/Sub channels such as my_channel are not keys, so SCAN does not return them.)</p> <h1 class="blog-sub-title">Executing Commands in Navicat 16 for Redis</h1><p>While you can accomplish practically everything you need to using <a class="default-links" href="https://navicat.com/en/products/navicat-for-redis" target="_blank">Navicat</a>'s intuitive GUI, you can issue commands directly to Redis via the Console window. It's accessible via the Tools -> Console command on the main menu or the Console button on the main toolbar:</p><img alt="console_button (9K)" src="https://www.navicat.com/link/Blog/Image/2023/20231117/console_button.jpg" height="87" width="245" /><p>Here's some sample output produced by the SCAN command that we learned about above:</p><img alt="console (24K)" src="https://www.navicat.com/link/Blog/Image/2023/20231117/console.jpg" height="308" width="334" /> <h1 class="blog-sub-title">Final Thoughts on Redis Commands</h1><p>Redis commands extend far beyond the key-value operations that we've explored in recent blog entries. By mastering these advanced commands for working with data structures, sets, sorted sets, hashes, and even real-time messaging, you can harness the full potential of Redis for a wide range of applications. Whether you're building a caching layer, implementing queues, or developing real-time applications, <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">Navicat 16 for Redis</a> provides a robust set of tools to meet your needs.</p></body></html>]]></description>
</item>
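The list commands in particular map cleanly onto a double-ended queue. As a mental model (not a Redis client), this Python sketch reproduces the LPUSH/RPUSH/LPOP/RPOP session from the article using collections.deque, which is double-ended just like a Redis list.

```python
from collections import deque

# LPUSH/RPUSH/LPOP/RPOP semantics sketched with Python's deque.
# This mimics the command behavior shown in the article; it does
# not talk to a Redis server.
my_list = deque()

my_list.appendleft("element1")  # LPUSH my_list "element1"
my_list.append("element2")      # RPUSH my_list "element2"
assert list(my_list) == ["element1", "element2"]  # LRANGE my_list 0 -1

left = my_list.popleft()  # LPOP my_list  -> "element1"
right = my_list.pop()     # RPOP my_list  -> "element2"
print(left, right)
```

Pushing with appendleft and popping with pop gives you a queue; using the same end for both gives you a stack, which is exactly the queue/stack duality the article mentions.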
<item>
<title>Working with JSON Documents in Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2366-working-with-json-documents-in-redis.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>Working with JSON Documents in Redis</title></head><body><b>Nov 10, 2023</b> by Robert Gravelle<br/><br/>    <p>Redis, known for its blazing fast performance, is a versatile NoSQL database that excels in handling key-value pairs. While it's primarily designed for simple data structures, Redis also supports more complex data types like lists, sets, and even JSON documents. In this blog article, we'll delve into the world of JSON documents in Redis, exploring how to work with them both through the command-line interface (CLI) and with the help of <a class="default-links" href="https://navicat.com/en/products/navicat-for-redis" target="_blank">Navicat 16 for Redis</a> on macOS.</p>    <h1 class="blog-sub-title">Understanding JSON in Redis</h1>    <p>JSON (JavaScript Object Notation) is a widely used data interchange format that's both human-readable and machine-friendly. Redis supports JSON documents through the RedisJSON module (included with Redis Stack), allowing users to store, query, and manipulate JSON data efficiently.</p>    <p>JSON documents in Redis are stored as values associated with a specific key. This allows for easy retrieval and manipulation using Redis commands.</p>    <h1 class="blog-sub-title">CLI: Interacting with JSON Documents</h1>    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>            <h3>Storing JSON Documents</h3>            <p>To store a JSON document in Redis, you can use the <code>JSON.SET</code> command:</p>            <pre><code>JSON.SET mykey . 
'{"name": "John Doe", "age": 30, "email": "john@example.com"}'</code></pre>            <p>In this example, we're storing a JSON object with a name, age, and email address under the key <code>mykey</code>.</p>        </li>        <li>            <h3>Retrieving JSON Documents</h3>            <p>Retrieving a JSON document is straightforward using the <code>JSON.GET</code> command:</p>            <pre><code>JSON.GET mykey</code></pre>            <p>This will return the JSON object associated with the key <code>mykey</code>.</p>        </li>        <li>            <h3>Updating JSON Documents</h3>            <p>Updating a JSON document can be done using the <code>JSON.SET</code> command again:</p>            <pre><code>JSON.SET mykey . '{"name": "John Doe", "age": 31, "email": "john@example.com"}'</code></pre>        </li>        <li>            <h3>Querying JSON Documents</h3>            <p>Redis provides the <code>JSON.GET</code> command with a <code>path</code> argument for querying specific elements within a JSON document:</p>            <pre><code>JSON.GET mykey .name</code></pre>            <p>This will return the value of the <code>name</code> field.</p>        </li>        <li>            <h3>Deleting JSON Documents</h3>            <p>Removing a JSON document is as simple as deleting the key associated with it:</p>            <pre><code>DEL mykey</code></pre>        </li>    </ul>    <h1 class="blog-sub-title">Using Navicat for Redis</h1>    <p>While the Redis CLI offers a command-line approach for working with JSON documents, using a graphical tool like Navicat can significantly enhance the user experience, especially for those who prefer a more visual approach. 
Navicat for Redis (macOS) version 16.2.6 supports the JSON key type.</p>    <p>Navicat for Redis (macOS) version 16.2.6 Main Screen</p>    <img alt="Screenshot_Navicat_16.2_Redis_Mac_01_MainScreen (400K)" src="https://www.navicat.com/link/Blog/Image/2023/20231110/Screenshot_Navicat_16.2_Redis_Mac_01_MainScreen.png" height="auto" width="1500" />    <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">        <li>            <h3>Connecting to Redis with Navicat</h3>            <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">                <li>Launch Navicat and select Connection -> Redis... from the main Toolbar.</li>                <li>Fill in the connection details (Host, Port, Authentication if required).</li>                <li>Click "Save" to establish a connection.</li>            </ul>        </li>        <li>            <h3>Navigating JSON Documents</h3>            <p>In Navicat, you can view and interact with Redis data in a structured manner. To work with JSON documents:</p>            <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">                <li>Locate the key containing the JSON document in the main "All Data" table.</li>                <li>Select the key and click the Editor button to view the key's value.</li>            </ul>        </li>        <li>            <h3>Editing JSON Documents</h3>            <p>Navicat provides a user-friendly JSON editor. You can directly modify the JSON document and save the changes.</p>        </li>    </ul>    <h1 class="blog-sub-title">Final Thoughts on Working with JSON Documents in Redis</h1>    <p>Redis' integration of JSON documents extends its capabilities beyond simple key-value pairs, opening up new possibilities for handling structured data. Whether you're a developer managing complex data structures or a data analyst querying JSON data, Redis provides a robust platform for your needs. 
<a class="default-links" href="https://navicat.com/en/products/navicat-for-redis" target="_blank">Navicat 16 for Redis</a> for macOS will help you navigate and manipulate JSON documents with ease and efficiency. Its intuitive JSON editor makes Navicat an invaluable tool, particularly for those who prefer a more visual approach to database management.</p></body></html>]]></description>
</item>
<item>
<title>What Sets Redis Apart from Other Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/2364-what-sets-redis-apart-from-other-databases.html</link>
<description><![CDATA[<!DOCTYPE html><head>    <title>What Sets Redis Apart from Other Databases</title></head><body><b>Nov 3, 2023</b> by Robert Gravelle<br/><br/><p>Redis, short for Remote Dictionary Server, is a versatile and high-performance key-value store that has gained significant popularity in the world of databases. It is known for its exceptional speed and efficiency in handling simple data structures. In this article, we will explore what sets Redis apart from other databases and how <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> complements it as a robust management tool.</p><h1 class="blog-sub-title">Speed and Simplicity</h1><p>Redis distinguishes itself with its remarkable speed, primarily owing to its in-memory nature. Unlike traditional databases that rely on disk storage, Redis stores data in RAM, enabling lightning-fast read and write operations. This makes Redis an ideal choice for applications that require quick data retrieval and low latency.</p><p>For example, consider a use case where a social media platform needs to retrieve user profile information. With Redis, this operation is executed almost instantaneously due to the in-memory storage, eliminating the delays associated with disk I/O operations.</p><h1 class="blog-sub-title">Data Structures for Flexibility</h1><p>One of Redis's strengths lies in its support for a wide range of data structures, each tailored for specific use cases:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Strings:</strong> Basic key-value pairs that can store strings, integers, or floating-point numbers.        <br><strong>Example:</strong><br><code>SET user:1 "John Doe"</code></li>    <li><strong>Lists:</strong> Collections of ordered elements allowing push and pop operations from both ends.        
<br><strong>Example:</strong><br><code>LPUSH mylist "item1"</code></li>    <li><strong>Sets:</strong> Unordered collections of unique elements, useful for tasks like counting unique items or creating tag systems.        <br><strong>Example:</strong><br><code>SADD tags "Redis" "Database" "NoSQL"</code></li>    <li><strong>Hashes:</strong> Maps of string fields to string values, perfect for representing objects.        <br><strong>Example:</strong><br><code>HSET user:1 username "johndoe" email "john@example.com"</code></li></ul><p>These data structures empower developers to select the most suitable structure for their specific use case, resulting in optimized performance.</p><h1 class="blog-sub-title">Pub/Sub Messaging</h1><p>Redis offers robust support for Publish/Subscribe messaging, enabling real-time communication between different parts of an application or even between different applications. This feature is invaluable in scenarios requiring instant updates or notifications.</p><p>For example, in a gaming application, Redis Pub/Sub can be employed to notify players about in-game events, such as a new message or a player joining a room.</p><pre><code>PUBLISH game:updates "New message: Hello, world!"</code></pre><h1 class="blog-sub-title">Lua Scripting for Complex Operations</h1><p>Redis provides the ability to execute Lua scripts, allowing developers to perform complex operations in a single command. This is particularly useful for tasks that involve multiple steps or conditional logic.</p><p>For example, suppose you need to atomically transfer funds from one account to another while ensuring consistency. 
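</p><p>The guard itself is simple. Here it is as a plain-Python sketch (account keys and balances are illustrative, and this version has no atomicity, which is exactly what running it as a Lua script inside Redis provides):</p>

```python
# Sketch of the balance-transfer guard (illustrative keys and balances).
balances = {"account:alice": 100, "account:bob": 25}

def transfer(sender, receiver, amount):
    if balances[sender] >= amount:
        balances[sender] -= amount    # DECRBY
        balances[receiver] += amount  # INCRBY
        return "SUCCESS"
    return "INSUFFICIENT FUNDS"

print(transfer("account:alice", "account:bob", 40))   # SUCCESS
print(transfer("account:alice", "account:bob", 500))  # INSUFFICIENT FUNDS
```

<p>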
This can be accomplished with Lua scripting.</p><pre><code>local sender_balance = tonumber(redis.call('GET', KEYS[1]))
local receiver_balance = tonumber(redis.call('GET', KEYS[2]))
local amount = tonumber(ARGV[1])
if sender_balance >= amount then
    redis.call('DECRBY', KEYS[1], amount)
    redis.call('INCRBY', KEYS[2], amount)
    return "SUCCESS"
else
    return "INSUFFICIENT FUNDS"
end</code></pre><h1 class="blog-sub-title">Navicat for Redis: A Comprehensive Management Tool</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> is a powerful GUI tool designed to enhance the management and interaction with Redis databases. It provides an intuitive interface for performing various tasks such as browsing, querying, and modifying data. Here are some key features that set Navicat for Redis apart:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>User-Friendly Interface</strong>: Navicat for Redis offers an intuitive and user-friendly interface, making it easy for both novice and experienced developers to navigate and interact with Redis databases.</li>    <li><strong>Visual Data Manipulation</strong>: With Navicat, users can easily view, edit, and manipulate data within Redis databases. This is particularly useful for tasks like updating values or adding new keys.</li>    <li><strong>Query Building</strong>: The tool allows users to construct and execute complex queries using a graphical interface. 
This can be a significant time-saver for developers who prefer a visual approach to querying.</li>    <li><strong>Data Import and Export</strong>: Navicat supports seamless data import and export operations, facilitating tasks such as migrating data between databases or creating backups.</li>    <li><strong>Task Automation</strong>: Navicat for Redis enables the scheduling and automation of routine tasks, helping to streamline database management processes.</li></ul><br> <figure>  <figcaption>Main Screen of Navicat for Redis on macOS</figcaption>  <img alt="Navicat for Redis Main Screen on macOS" src="https://www.navicat.com/link/Blog/Image/2023/20231103/Screenshot_Navicat_16.2_Redis_Mac_01_MainScreen.jpg" height="634" width="1064" /></figure> <h1 class="blog-sub-title">Final Thoughts on What Sets Redis Apart from Other Databases</h1><p>Redis stands out as a high-performance key-value store, thanks to its in-memory nature and versatile data structures. It excels in scenarios where speed and low latency are paramount. The addition of <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> enhances the Redis experience by providing a user-friendly and efficient management tool. Navicat's features like visual data manipulation, query building, and task automation make it a valuable companion for developers working with Redis databases. Together, Redis and Navicat form a powerful combination for building robust and high-performing applications.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 16.3 Adds Support for Redis Cluster</title>
<link>https://www.navicat.com/company/aboutus/blog/2362-navicat-16-3-adds-support-for-redis-cluster.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16.3 Adds Support for Redis Cluster</title></head><body><b>Oct 27, 2023</b> by Robert Gravelle<br/><br/><p>Navicat made headlines back in May of 2023 when the company introduced <a class="default-links" href="https://www.navicat.com/products/navicat-for-redis" target="_blank">Navicat for Redis</a>. Since then, the development team has added several notable enhancements, the most significant being support for the Redis JSON key type. Version 16.3 marks another milestone in the evolution of both <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium</a> and Navicat for Redis, adding support for Redis Cluster to both products. Today's blog provides a brief overview of Redis Cluster, shows how to connect to server instances in Navicat, and lists a few other features that you'll find in Navicat Premium. </p><h1 class="blog-sub-title">Redis Cluster 101</h1><p>Redis Cluster is a distributed implementation of Redis, the popular in-memory data structure store. It brings high availability and scalability to Redis setups. Introduced in Redis 3.0, it has become a crucial tool for large-scale applications.</p><p>One of its key features is automatic data sharding. Redis Cluster partitions the dataset across nodes, allowing for horizontal scaling. Each node holds a specific range of hash slots. This enables the handling of larger datasets compared to a single Redis instance.</p><p>Moreover, Redis Cluster ensures high availability through a master-slave replication model. Data is replicated across nodes, providing resilience against node failures. In case of a failure, a failover mechanism promotes a replica to a master, ensuring uninterrupted access to data.</p><p>Redis Cluster prioritizes availability and partition tolerance, making it a robust choice for distributed systems. 
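</p><p>To illustrate the sharding, here is a short Python sketch of slot assignment: Redis Cluster hashes each key with CRC16 (XMODEM variant) and takes the result modulo 16384, hashing only the {hash tag} portion when one is present. This is an illustrative reimplementation, not the server's code:</p>

```python
def crc16(data: bytes) -> int:
    """CRC16/XMODEM, the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Keys with a non-empty {hash tag} are hashed on the tag only,
    # so related keys can be forced into the same slot (and node).
    start, end = key.find("{"), key.find("}")
    if 0 <= start < end - 1:
        key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("user:1000"))
print(key_slot("{user:1000}.following") == key_slot("{user:1000}.followers"))  # True
```

<p>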
It provides a balance between scalability and fault tolerance, making it a valuable tool for applications with demanding requirements.</p><h1 class="blog-sub-title">Connecting to Redis Cluster</h1><p>The Connection dialog now contains a Type drop-down where you can choose from a Standalone database instance or one that is part of a Cluster:</p><img alt="connection_dialog (45K)" src="https://www.navicat.com/link/Blog/Image/2023/20231027/connection_dialog.jpg" height="669" width="547" /><p>Selecting the Cluster item from the drop-down causes the Role drop-down to appear directly beneath it:</p><img alt="role_dropdown (30K)" src="https://www.navicat.com/link/Blog/Image/2023/20231027/role_dropdown.jpg" height="283" width="515" /><p>It allows you to choose between the Master database and a Replica (i.e., slave).</p><h1 class="blog-sub-title">Other New Features In Navicat Premium 16.3</h1><p>Navicat Premium 16.3 introduces a few other features, including support for the MongoDB Time-Series Collection as well as support for MySQL descending primary keys.</p><p>New in version 5.0, the MongoDB Time-Series Collection efficiently stores sequences of measurements over a period of time. Time series data is any data that is collected over time and is uniquely identified by one or more unchanging parameters. The unchanging parameters that identify your time series data are generally your data source's metadata. Compared to normal collections, storing time series data in time series collections improves query efficiency and reduces the disk usage for time series data and secondary indexes.</p><p>Meanwhile, the MySQL descending primary key utilizes an index that stores rows in descending order. The query optimizer will choose this type of index when a descending order is requested by the query. 
This index type was introduced in MySQL 8.0.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned about some of the exciting new features in Navicat 16.3, namely support for Redis Cluster, MongoDB Time-Series Collections and MySQL descending primary keys.</p><p>Both <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16.3</a> and <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-redis" target="_blank">Navicat for Redis 16.3</a> are available for a free trial of 14 days on Windows, macOS and Linux.</p></body></html>]]></description>
</item>
<item>
<title>Working with Strings in Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2359-working-with-strings-in-redis.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Working with Strings in Redis</title></head><body><b>Oct 20, 2023</b> by Robert Gravelle<br/><br/><p>Redis is a powerful open-source, in-memory data structure store that is used for various purposes such as caching, session management, real-time analytics, and more. One of the fundamental data types in Redis is strings, which can hold any kind of text or binary data, up to a maximum limit of 512 megabytes. In today's blog, we'll learn how to work with strings in Redis, both using the CLI and <a class="default-links" href="https://navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">Using the Command Line Interface (CLI)</h1><p>Redis provides a command-line interface (CLI) that allows users to interact with the database using a set of commands. Here's how you can work with strings via the Redis CLI:</p><h3>1. Set a String</h3><p>To set a string in Redis, you can use the <code>SET</code> command. This command assigns a value to a key.</p><pre>    <code>SET my_key "Hello, Redis!"</code></pre><p>In this example, we're setting the value "Hello, Redis!" to the key <code>my_key</code>.</p><h3>2. Get a String</h3><p>To retrieve the value of a string, you can use the <code>GET</code> command.</p><pre>    <code>GET my_key</code></pre><p>This command will return the value associated with the key <code>my_key</code>, which is "Hello, Redis!" in this case.</p><h3>3. Append to a String</h3><p>The <code>APPEND</code> command is used to append a value to an existing string. If the key doesn't exist, a new key is created with the provided value.</p><pre>    <code>APPEND my_key ", How are you?"</code></pre><p>After this operation, the value of <code>my_key</code> will be "Hello, Redis!, How are you?".</p><h3>4. Get a Substring</h3><p>You can retrieve a substring from a string using the <code>GETRANGE</code> command. 
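</p><p>These string commands behave like operations on an in-memory map. Here is a minimal Python sketch of SET, GET, APPEND, and GETRANGE over a plain dict (an illustrative stand-in, not a Redis client; note that GETRANGE treats both indices as inclusive, and this sketch ignores negative offsets):</p>

```python
# Minimal in-memory sketch of four Redis string commands (illustration only).
store = {}

def set_(key, value):
    store[key] = value
    return "OK"

def get(key):
    return store.get(key)

def append(key, value):            # returns the new string length, like APPEND
    store[key] = store.get(key, "") + value
    return len(store[key])

def getrange(key, start, end):     # both indices inclusive (negatives not handled here)
    return store.get(key, "")[start:end + 1]

set_("my_key", "Hello, Redis!")
append("my_key", ", How are you?")
print(get("my_key"))               # Hello, Redis!, How are you?
print(getrange("my_key", 0, 4))    # Hello
```

<p>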
This command takes two arguments: the key and the range (start and end indices).</p><pre>    <code>GETRANGE my_key 0 4</code></pre><p>Executing this command will return the substring "Hello" from <code>my_key</code>.</p><h1 class="blog-sub-title">Using Navicat for Redis</h1><p>Navicat for Redis is a powerful graphical user interface (GUI) tool that provides a user-friendly environment for working with Redis databases. Here's how you can perform string operations using Navicat:</p><h3>1. Connecting to Redis</h3><p>After launching Navicat, start by creating a new connection to your Redis server. Provide the necessary connection details such as host, port, and authentication credentials if required.</p><img alt="redis_connection_details (52K)" src="https://www.navicat.com/link/Blog/Image/2023/20231020/redis_connection_details.jpg" height="707" width="562" /><h3>2. Navigating to the Keys</h3><p>Once connected, you'll see the list of databases on the left-hand side. Expand the database containing the key you want to work with and navigate to the "Keys" section.</p><img alt="redis_keys (30K)" src="https://www.navicat.com/link/Blog/Image/2023/20231020/redis_keys.jpg" height="639" width="202" /><h3>3. Setting a String</h3><p>To set a string, right-click on the "Keys" section, select "Add Key", and choose "String" from the dropdown menu.</p><img alt="setting_a_string_value (63K)" src="https://www.navicat.com/link/Blog/Image/2023/20231020/setting_a_string_value.jpg" height="581" width="729" /><p>Enter the desired key name and value, then click "Apply". The new key will appear in the Keys List:</p><img alt="new_string (34K)" src="https://www.navicat.com/link/Blog/Image/2023/20231020/new_string.jpg" height="232" width="731" /><h3>4. Getting a String</h3><p>To retrieve the value of a string, simply double-click on the key in the "Keys" section. Navicat will display the key details, including its value.</p><h3>5. 
Appending to a String</h3><p>Right-click on the key and select "Edit Key" from the context menu. You can then append the desired text to the existing value.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog covered how to work with strings in Redis, both using the CLI and <a class="default-links" href="https://navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>. Working with strings in Redis is a fundamental aspect of utilizing the database. Whether you choose to use the command-line interface or a GUI tool like Navicat for Redis, understanding how to set, get, append, and manipulate strings allows you to effectively manage your data.</p></body></html>]]></description>
</item>
<item>
<title>Joining Database Tables on Non-Foreign Key Fields</title>
<link>https://www.navicat.com/company/aboutus/blog/2355-joining-database-tables-on-non-foreign-key-fields.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Joining Database Tables on Non-Foreign Key Fields</title></head><body><b>Oct 13, 2023</b> by Robert Gravelle<br/><br/><p>In the world of relational databases, joining tables on foreign keys is a common and well-understood practice. However, there are situations where you need to join tables based on non-foreign key fields. This might seem unconventional, but it can be a powerful technique when used appropriately. In this article, we will explore the concept of joining database tables on non-foreign key fields, and we'll demonstrate how to do it using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>.</p><h1 class="blog-sub-title">Why Join on Non-Foreign Key Fields?</h1><p>In typical database design, tables are related using foreign keys, which establish clear relationships between data. However, there are scenarios where you might need to join tables based on fields that are not explicitly marked as foreign keys. 
Here are some reasons why you might consider this approach:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Data Enrichment:</strong> You may want to enrich your data by combining information from different tables based on some shared characteristics.</li>    <li><strong>Legacy Databases:</strong> In legacy databases, foreign keys may not have been established, or the schema might not follow best practices.</li>    <li><strong>Data Migration:</strong> During data migration or integration, you might need to join data from multiple sources.</li>    <li><strong>Complex Queries:</strong> Some complex analytical or reporting queries may require joining tables on non-foreign key fields.</li></ul><h1 class="blog-sub-title">Using Navicat for Non-Foreign Key Joins</h1><p>Navicat is a powerful database client that supports various database management systems like MySQL, PostgreSQL, SQL Server, and more. It provides a user-friendly interface for designing queries, making it an excellent choice for joining tables on non-foreign key fields.</p><h3>Example: Combining Customer and Order Data</h3><p>Let's consider a scenario where you have two tables: <code>Customers</code> and <code>Orders</code>. Normally, these tables would be related through a <code>CustomerID</code> foreign key field in the <code>Orders</code> table. 
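</p><p>The scenario discussed here can also be reproduced end to end with an in-memory SQLite sketch (the schema and sample data below are illustrative, not from the article):</p>

```python
import sqlite3

# Illustrative schema: no foreign key links Orders to Customers;
# the only shared characteristic is the email address.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers (Name TEXT, Email TEXT);
    CREATE TABLE Orders (OrderID INTEGER, CustomerEmail TEXT, Total REAL);
    INSERT INTO Customers VALUES ('Ann', 'ann@example.com'), ('Bob', 'bob@example.com');
    INSERT INTO Orders VALUES (1, 'ann@example.com', 19.99), (2, 'ann@example.com', 5.00);
""")

rows = con.execute("""
    SELECT Customers.Name, Orders.OrderID
    FROM Customers
    INNER JOIN Orders ON Customers.Email = Orders.CustomerEmail
""").fetchall()
print(sorted(rows))  # [('Ann', 1), ('Ann', 2)]
```

<p>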
However, in this example, we want to join them based on a shared <code>Email</code> field, which is not a foreign key.</p><p>To join the <code>Customers</code> and <code>Orders</code> tables on the <code>Email</code> field, you can use a SQL query like this:</p><pre><code>SELECT Customers.*, Orders.*
FROM Customers
INNER JOIN Orders ON Customers.Email = Orders.CustomerEmail;</code></pre><img alt="join_on_email (28K)" src="https://www.navicat.com/link/Blog/Image/2023/20231013/join_on_email.jpg" height="141" width="520" /><p>In this query:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><code>Customers.*</code> and <code>Orders.*</code> select all columns from both tables.</li>    <li><code>INNER JOIN</code> combines rows with matching <code>Email</code> and <code>CustomerEmail</code> values.</li></ul><h3>Tips for Non-Foreign Key Joins</h3><p>When joining tables on non-foreign key fields, consider the following tips:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li><strong>Data Consistency:</strong> Ensure that the non-foreign key fields you're joining on have consistent data. In our example, the <code>Email</code> field should be consistently formatted and not contain missing or duplicate values.</li>    <li><strong>Indexes:</strong> Consider creating indexes on the fields you're joining on. Indexes can significantly improve query performance.</li>    <li><strong>Data Types:</strong> Ensure that the data types of the fields being joined match. 
For example, if you're joining on an email address, both fields should have the same data type, such as VARCHAR.</li>    <li><strong>Testing:</strong> Always thoroughly test your queries to verify that the results are as expected, especially when joining tables on non-foreign key fields.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>Joining database tables on non-foreign key fields is a flexible and powerful technique that can help you work with data in unconventional ways. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> provides an intuitive interface for crafting SQL queries that perform these joins, making it a valuable tool for database professionals and developers.</p><p>Remember that while joining on non-foreign key fields can be useful, it should be done thoughtfully and with attention to data quality and consistency. When used appropriately, this approach can unlock new insights and possibilities in your data analysis and reporting.</p></body></html>]]></description>
</item>
<item>
<title>Expiring Keys in Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2352-expiring-keys-in-redis.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Expiring Keys in Redis</title></head><body><b>Oct 6, 2023</b> by Robert Gravelle<br/><br/><p>Redis is a high-performance, in-memory data store known for its speed and versatility. One of its useful features is the ability to set expiration times for keys. Expiring keys in Redis is crucial for managing data and ensuring that outdated or temporary data is automatically removed from the database. In this article, we'll explore how to expire keys in Redis using the redis-cli and <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> as well as how this feature can be applied in various scenarios.</p><h1 class="blog-sub-title">Setting Expiration for a Key</h1><p>To set an expiration time for a key in Redis, you can use the <code>EXPIRE</code> or <code>SETEX</code> command. The <code>EXPIRE</code> command allows you to set an expiration time in seconds, while <code>SETEX</code> sets both the key's value and its expiration time in one command. Here's how to use both commands:</p><h3>Using EXPIRE:</h3><pre>127.0.0.1:6379&gt; SET mykey "Hello, Redis"
OK
127.0.0.1:6379&gt; EXPIRE mykey 60
(integer) 1</pre><p>In this example, we first set the value of <code>mykey</code> to "Hello, Redis" using the <code>SET</code> command. Then, we use <code>EXPIRE</code> to set an expiration time of 60 seconds for <code>mykey</code>. After 60 seconds, the key will automatically be removed from the database.</p><h3>Using SETEX:</h3><pre>127.0.0.1:6379&gt; SETEX mykey 60 "Hello, Redis"
OK</pre><p>With <code>SETEX</code>, we achieve the same result in a single command by specifying the key, the expiration time (60 seconds in this case), and the value.</p><h1 class="blog-sub-title">Checking the Time-to-Live (TTL)</h1><p>To check the remaining time until a key expires, you can use the <code>TTL</code> command. 
This command returns the remaining time in seconds, -2 if the key does not exist, or -1 if the key exists but has no associated expiration time (it will never expire). Here's how to use it:</p><pre><code>127.0.0.1:6379&gt; TTL mykey
(integer) 30</code></pre><p>In this example, we check the remaining time for <code>mykey</code>, which was set to expire after 60 seconds. The command returns 30, indicating that there are 30 seconds left until the key expires.</p><h1 class="blog-sub-title">Removing Expired Keys</h1><p>Redis automatically removes keys when their expiration time is reached. However, you can also manually delete keys using the <code>DEL</code> command. This can be useful if you want to remove a key before it expires. Here's how to use it:</p><pre><code>127.0.0.1:6379&gt; DEL mykey
(integer) 1</code></pre><p>In this example, we use the <code>DEL</code> command to remove <code>mykey</code> manually. After running this command, the key will no longer exist in the database.</p><h1 class="blog-sub-title">Setting Key Expiration in Navicat</h1><p>In Navicat, the data editor includes a TTL drop-down for setting a key's expiration:</p><img alt="TTL_dropdown (74K)" src="https://www.navicat.com/link/Blog/Image/2023/20231006/TTL_dropdown.jpg" height="615" width="688" /><p>Options include "No TTL", "Expire In (seconds)" and "Expire At (Local Time)". Here's how to expire a key in 60 seconds:</p><img alt="expire_in_60_seconds (21K)" src="https://www.navicat.com/link/Blog/Image/2023/20231006/expire_in_60_seconds.jpg" height="316" width="684" /><p>The key's expiry information will be set when the Apply button is clicked.</p><h1 class="blog-sub-title">Common Use Cases for Expiring Keys</h1><p>Key expiration in Redis can be used in various scenarios to manage data effectively:</p><h3>1. Caching</h3><p>Redis is often used as a caching layer. 
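</p><p>A toy cache illustrates the pattern (pure Python with an injectable clock so the expiry is easy to see; this sketches the EXPIRE/TTL semantics, not how Redis implements expiry internally):</p>

```python
import time

# Toy expiring cache (illustration of EXPIRE/TTL semantics, not Redis internals).
class ExpiringCache:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.data = {}        # key -> (value, deadline or None)

    def set(self, key, value, ttl=None):
        deadline = self.clock() + ttl if ttl is not None else None
        self.data[key] = (value, deadline)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        value, deadline = item
        if deadline is not None and self.clock() >= deadline:
            del self.data[key]    # lazily expire on access
            return None
        return value

    def ttl(self, key):
        # -2: missing key, -1: no expiry, otherwise whole seconds remaining
        if self.get(key) is None and key not in self.data:
            return -2
        _, deadline = self.data[key]
        return -1 if deadline is None else max(0, int(deadline - self.clock()))

now = [0.0]
cache = ExpiringCache(clock=lambda: now[0])
cache.set("mykey", "Hello, Redis", ttl=60)
print(cache.ttl("mykey"))   # 60
now[0] += 61                # simulate 61 seconds passing
print(cache.get("mykey"))   # None
```

<p>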
By setting short expiration times for cache keys, you can ensure that the cache remains fresh and relevant, preventing the storage of stale data.</p><h3>2. Session Management</h3><p>Managing user sessions in a web application becomes easier with Redis. Setting session data to expire after a certain period of inactivity can help free up resources and enhance security.</p><h3>3. Rate Limiting</h3><p>Rate limiting is a common technique for API throttling. Redis can be used to count and limit the number of requests from a client within a specific time frame by expiring rate limit keys after a predefined time.</p><h3>4. Temporary Data Storage</h3><p>Redis can serve as a temporary data store for background jobs or temporary data processing. Expiring keys automatically cleans up data that is no longer needed, reducing manual intervention.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this article, we learned how to expire keys in Redis using the redis-cli and <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> as well as how this feature can be applied in various scenarios. Key expiration in Redis is a powerful feature that helps manage data efficiently, ensuring that outdated or temporary data is automatically removed from the database. Whether you're using Redis for caching, session management, rate limiting, or temporary data storage, the ability to set expiration times for keys can significantly improve the performance and reliability of your applications.</p></body></html>]]></description>
</item>
<item>
<title>Comparing Database Connectivity: Navicat versus Java-based Tools</title>
<link>https://www.navicat.com/company/aboutus/blog/2350-comparing-database-connectivity-navicat-versus-java-based-tools.html</link>
<description><![CDATA[<!DOCTYPE html><html><head>    <title>Comparing Database Connectivity: Navicat versus Java-based Tools</title></head><body><b>Sep 28, 2023</b> by Robert Gravelle<br/><br/><p>In the realm of database management and development, the choice of tools can greatly impact efficiency and productivity. Java-based tools have emerged as strong contenders, offering diverse capabilities for working with databases. However, when it comes to native database connectivity, the differences between tools can be significant. Let's explore how Navicat's ability to connect to databases natively sets it apart from Java-based tools on the market.</p><h1 class="blog-sub-title">Understanding Native Database Connectivity</h1><p>Native database connectivity refers to a tool's ability to communicate directly with a database using the database system's native protocol. This eliminates intermediaries or translation layers, resulting in optimized and efficient connections. Java-based tools that support native database connectivity can harness the inherent optimizations and features provided by each database system, leading to enhanced performance and smoother workflows.</p><h1 class="blog-sub-title">The Efficiency Factor</h1><p>Navicat's standout feature is its native database connectivity, which significantly enhances efficiency. When compared to some Java-based tools, the difference becomes evident. Native connectivity eliminates the need for additional translations, resulting in faster data transfer, query execution, and overall performance. This can be crucial for managing large datasets, executing complex queries, and ensuring real-time interactions with the database.</p><h1 class="blog-sub-title">Streamlined Development Workflows</h1><p>Java-based tools that lack native database connectivity might encounter bottlenecks in development workflows. 
These tools often require additional steps for data translation and interpretation, leading to delays in coding and testing. Navicat's native connectivity streamlines development by directly communicating with the database system, reducing wait times, and enabling agile iterations. This agility is a boon for developers seeking optimal productivity.</p><h1 class="blog-sub-title">Accuracy in Data Manipulation</h1><p>Another area where Navicat's native connectivity shines is data manipulation. Java-based tools that rely on intermediate layers might introduce inaccuracies during data transformation and visualization. Navicat's direct interaction with the database's native format ensures accurate data previews, making it an ideal choice for tasks involving data analysis, transformation, and reporting.</p><h1 class="blog-sub-title">Security and Compatibility</h1><p>Native connectivity not only enhances efficiency but also contributes to security and compatibility. Java-based tools may require additional configurations to match the authentication and authorization mechanisms of different database systems. Navicat's native connectivity adheres to these protocols, providing enhanced security and better compatibility with the latest features and updates of supported databases.</p><h1 class="blog-sub-title">Final Thoughts</h1><p>When comparing Navicat's ability to connect to databases natively with other Java-based tools, it's clear that the former offers distinct advantages. Native database connectivity propels Navicat's efficiency, development workflows, accuracy in data manipulation, and compatibility. These benefits collectively contribute to a seamless database management and development experience.</p><p>As the field of database management continues to evolve, the importance of native connectivity becomes even more pronounced. 
By choosing a tool like <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, which prioritizes native connectivity, users can harness the full potential of their database systems, optimize their workflows, and ensure secure and reliable interactions with their data.</p></body></html>]]></description>
</item>
<item>
<title>Using Hashes in Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2346-using-hashes-in-redis.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Hashes in Redis</title></head><body><b>Sep 15, 2023</b> by Robert Gravelle<br/><br/><p>In Redis, a Hash is a data structure that maps a string key to a collection of field-value pairs. Thus, Hashes are useful for representing basic objects and for storing groupings of counters, among other things. This article will go over some of the main commands for managing hashes, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">Creating and Updating a Hash</h1><p>In Redis, the key is the name of the Hash and the value represents a sequence of field-name field-value entries. For instance, we could describe a vehicle object as follows:</p><pre>vehicle make Toyota model Crown trim Platinum year 2023 color black</pre><p>To work with Hashes, we use commands that are similar to what we use with strings, since Hash field values are strings. Case in point, the command HSET sets field in the Hash to value. If key does not exist, a new key storing a hash is created. 
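</p><p>A dict-of-dicts sketch in Python mirrors this behavior, including HSET's integer reply (1 when the field is new, 0 when an existing field is updated); it is an illustration, not a Redis client:</p>

```python
# Dict-of-dicts stand-in for Redis hashes (illustration, not a client).
hashes = {}

def hset(key, field, value):
    h = hashes.setdefault(key, {})
    is_new = field not in h        # 1 if the field is new, 0 if updated
    h[field] = value
    return int(is_new)

def hget(key, field):
    return hashes.get(key, {}).get(field)

def hdel(key, *fields):            # returns the number of fields removed
    h = hashes.get(key, {})
    return sum(1 for f in fields if h.pop(f, None) is not None)

print(hset("vehicle", "make", "Toyota"))  # 1
print(hset("vehicle", "make", "Honda"))   # 0
print(hget("vehicle", "make"))            # Honda
print(hdel("vehicle", "make", "color"))   # 1
```

<p>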
If field already exists in the hash, it is overwritten.</p><pre>HSET key field value</pre><p>For each HSET command, Redis replies with an integer as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>1 if field is a new field in the hash and value was set.</li><li>0 if field already exists in the hash and the value was updated.</li></ul><p>Let's create the vehicle hash described above:</p><pre>HSET vehicle make "Toyota"   // 1
HSET vehicle model "Crown"   // 1
HSET vehicle trim "Platinum" // 1
HSET vehicle year 2015       // 1
HSET vehicle color "black"   // 1</pre><p>Now, if we update the value of the year field to 2022, HSET returns 0:</p><pre>HSET vehicle year 2022 // 0</pre><h1 class="blog-sub-title">Creating a Hash in Navicat</h1><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>, Hash fields may be added using the built-in Editor. Clicking on the ellipsis [...] button on the right of a field opens a special Editor where you can enter individual field values:</p><img alt="vehicle_hash_in_navicat_editor (70K)" src="https://www.navicat.com/link/Blog/Image/2023/20230915/vehicle_hash_in_navicat_editor.jpg" height="614" width="659" /><p>Clicking the Apply button adds the new Hash or field.</p><h1 class="blog-sub-title">Fetching a Hash Field's Value</h1><p>We can fetch the value associated with field in a Hash using the HGET command:</p><pre>HGET key field</pre><p>For example, we can use it to verify that we are getting 2022 as the value of year instead of 2015:</p><pre>HGET vehicle year // 2022</pre><p>We can also get all hash contents (fields and values) using the HGETALL command:</p><pre>HGETALL key</pre><p>Let's try it out:</p><pre>HGETALL vehicle
/* Returns:
make
Toyota
model
Crown
trim
Platinum
year
2022
color
black
*/</pre><p>HGETALL replies with an empty list when the provided key argument doesn't exist.</p><h1 class="blog-sub-title">Deleting a 
Field</h1><p>The HDEL command removes the specified fields from the hash stored at key. Specified fields that do not exist within this hash are ignored. HDEL returns the number of fields that were removed from the hash. If a key does not exist, it is treated as an empty hash and HDEL returns 0.</p><pre>HDEL key field [field ...]</pre><p>Let's use HDEL to delete the year and color fields:</p><pre>HDEL vehicle year color // 2</pre><p>In the Navicat Editor, we can remove a field by selecting it and clicking the Delete [-] button located under the fields list:</p><img alt="delete_button_in_navicat_editor (25K)" src="https://www.navicat.com/link/Blog/Image/2023/20230915/delete_button_in_navicat_editor.jpg" height="335" width="659" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog article highlighted some of the main commands for managing Hashes in Redis, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>Using Redis Hashes</title>
<link>https://www.navicat.com/company/aboutus/blog/2348-使用-redis-hash.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Redis Hashes</title></head><body><b>Sep 15, 2023</b> by Robert Gravelle<br/><br/><p>In Redis, a Hash is a data structure that maps a string key to field-value pairs. Hashes are therefore useful for representing basic objects and for storing groupings of counters, among other things. This article goes over some of the main commands for managing Hashes, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/cht/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">Creating and Updating a Hash</h1><p>In Redis, the key is the name of the Hash and the value represents a sequence of field-name field-value entries. For instance, we could describe a vehicle object as follows:</p><pre>vehicle make Toyota model Crown trim Platinum year 2015 color black</pre><p>Since Hash field values are strings, we work with Hashes using commands similar to those for strings. The HSET command sets a field in the Hash to a value. If the key does not exist, a new key storing a Hash is created. If the field already exists in the Hash, it is overwritten.</p><pre>HSET key field value</pre><p>For each HSET command, Redis replies with an integer as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>1 if the field is a new field in the Hash and the value was set.</li><li>0 if the field already exists in the Hash and the value was updated.</li></ul><p>Let's create the vehicle Hash described above:</p><pre>HSET vehicle make "Toyota"   // 1
HSET vehicle model "Crown"   // 1
HSET vehicle trim "Platinum" // 1
HSET vehicle year 2015       // 1
HSET vehicle color "black"   // 1</pre><p>Now, if we update the value of the year field to 2022, HSET returns 0:</p><pre>HSET vehicle year 2022 // 0</pre><h1 class="blog-sub-title">Creating a Hash in Navicat</h1><p>In <a class="default-links" href="https://www.navicat.com/cht/products/navicat-for-redis" target="_blank">Navicat for Redis</a>, Hash fields may be added using the built-in Editor. Clicking the ellipsis [...] button on the right of a field opens a special Editor where you can enter individual field values:</p><img alt="vehicle_hash_in_navicat_editor (70K)" src="https://www.navicat.com/link/Blog/Image/2023/20230915/CHT/vehicle_hash_in_navicat_editor.jpg" height="603" width="620" /><p>Clicking the Apply button adds the new Hash or field.</p><h1 class="blog-sub-title">Fetching a Hash Field's Value</h1><p>We can fetch the value associated with a field in a Hash using the HGET command:</p><pre>HGET key field</pre><p>For example, we can verify that we now get 2022 as the value of year instead of 2015:</p><pre>HGET vehicle year // 2022</pre><p>We can also get all Hash contents (fields and values) using the HGETALL command:</p><pre>HGETALL key</pre><p>Let's try it out:</p><pre>HGETALL vehicle
/* Returns:
make
Toyota
model
Crown
trim
Platinum
year
2022
color
black
*/</pre><p>HGETALL replies with an empty list when the provided key does not exist.</p><h1 class="blog-sub-title">Deleting a Field</h1><p>The HDEL command removes the specified fields from the Hash stored at the key. Specified fields that do not exist within the Hash are ignored. HDEL returns the number of fields that were removed. If the key does not exist, it is treated as an empty Hash and HDEL returns 0.</p><pre>HDEL key field [field ...]</pre><p>Let's use HDEL to delete the year and color fields:</p><pre>HDEL vehicle year color // 2</pre><p>In the Navicat Editor, we can remove a field by selecting it and clicking the Delete [-] button located under the fields list:</p><img alt="delete_button_in_navicat_editor (25K)" src="https://www.navicat.com/link/Blog/Image/2023/20230915/CHT/delete_button_in_navicat_editor.jpg" height="314" width="621" /><h1 class="blog-sub-title">Conclusion</h1><p>This article highlighted some of the main commands for managing Hashes in Redis, both via the 
redis-cli and using <a class="default-links" href="https://www.navicat.com/cht/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>A Quick Guide to Redis Sets</title>
<link>https://www.navicat.com/company/aboutus/blog/2344-a-quick-guide-to-redis-sets.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Quick Guide to Redis Sets</title></head><body><b>Sep 8, 2023</b> by Robert Gravelle<br/><br/><p>Redis supports several data types for storing collections of items. These include lists, sets, and hashes. Last week's blog article covered the List data type and highlighted some of the main commands for managing them. In today's follow-up we'll be turning our attention to the set type. In Redis, a Set is similar to a List except that it doesn't keep any specific order for its elements and each element must be unique. This article will go over some of the main commands for managing sets, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">Creating a Set</h1><p>In Redis, we can create a Set by using the SADD command that adds the specified members to the key:</p><pre>SADD key member [member ...]</pre><p>As mentioned previously, each element must be unique. For that reason, specified members that are already part of the Set are ignored. If the key doesn't exist, a new Set is created and the unique specified members are added. If the key exists but holds a value that is not a Set, an error is returned.</p><p>Here's the command to create a "vehicles" set:</p><pre>SADD vehicles "Infiniti"           // 1
SADD vehicles "Mazda"              // 1
SADD vehicles "Ford" "Mercedes"    // 2
SADD vehicles "Porsche" "Mercedes" // 1</pre><p>Note that the SADD command returns the number of members that were added in that statement, not the size of the Set. We can see in the last line that only one element was added as there was already a "Mercedes" value.</p><h1 class="blog-sub-title">Creating a Set in Navicat</h1><p>In the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> Editor, Set values are represented as Elements. 
Clicking on the ellipsis [...] button on the right of the Element opens a special Editor where you can enter individual Set elements:</p><img alt="vehicles_set_in_navicat_editor (69K)" src="https://www.navicat.com/link/Blog/Image/2023/20230908/vehicles_set_in_navicat_editor.jpg" height="624" width="660" /><p>Clicking the Apply button adds the new Set or element. Navicat automatically removes duplicate values.</p><h1 class="blog-sub-title">Removing Members From a Set</h1><p>We can remove members from a Set by using the SREM command:</p><pre>SREM key member [member ...]</pre><pre>SREM vehicles "Mazda" "Mercedes" // 2
SREM vehicles "Dodge"            // 0</pre><p>Similar to the SADD command, SREM returns the number of members that were removed.</p><p>In the Navicat Editor, we can remove any Set element by selecting it and clicking the Delete [-] button located under the Element values:</p><img alt="delete_button_in_navicat_editor (25K)" src="https://www.navicat.com/link/Blog/Image/2023/20230908/delete_button_in_navicat_editor.jpg" height="333" width="652" /><h1 class="blog-sub-title">Verifying That a Value Exists</h1><p>To verify that a member is part of a Set, we can use the SISMEMBER command:</p><pre>SISMEMBER key member</pre><p>If the member is part of the Set, this command returns 1; otherwise, it returns 0:</p><pre>SISMEMBER vehicles "Infiniti"   // 1
SISMEMBER vehicles "Alfa Romeo" // 0</pre><h1 class="blog-sub-title">Viewing a Set</h1><p>To show all the members that exist in a Set, we can use the SMEMBERS command:</p><pre>SMEMBERS key</pre><p>Let's see what is currently contained in the vehicles Set:</p><pre>SMEMBERS vehicles
// returns "Infiniti", "Ford", "Porsche"</pre><h1 class="blog-sub-title">Merging Sets</h1><p>We can combine Sets very easily using the SUNION command:</p><pre>SUNION key [key ...]</pre><p>Each argument to SUNION represents a Set that we want to merge into a larger Set. 
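</p><p>These Set semantics are easy to sketch with Python's built-in set type; the following toy model (not the Redis client) mimics SADD's reply and SUNION's merge:</p>

```python
# Toy in-memory model of SADD and SUNION using Python sets.
store = {}

def sadd(key, *members):
    s = store.setdefault(key, set())
    added = len(set(members) - s)   # members already present are ignored
    s.update(members)
    return added                     # SADD replies with the count actually added

def sunion(*keys):
    merged = set()
    for key in keys:
        merged |= store.get(key, set())   # a union removes overlaps automatically
    return merged

print(sadd("vehicles", "Porsche", "Mercedes"))   # 2 on a fresh key
print(sadd("vehicles", "Ford", "Mercedes"))      # 1: "Mercedes" already present
sadd("more_vehicles", "Corvette", "Alfa Romeo")
print(sorted(sunion("vehicles", "more_vehicles")))
```

<p>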
Note that any overlapping members will be removed in order to maintain element uniqueness.</p><p>Say that we had another Set named more_vehicles that contained the values "Corvette" and "Alfa Romeo". We could view all the members of both the vehicles and more_vehicles Sets as follows:</p><pre>SUNION vehicles more_vehicles
// "Infiniti", "Ford", "Porsche", "Corvette", "Alfa Romeo"</pre><h1 class="blog-sub-title">Conclusion</h1><p>This blog article highlighted some of the main commands for managing Sets in Redis, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>Navicat Wins a DBTA Readers' Choice Award!</title>
<link>https://www.navicat.com/company/aboutus/blog/2343-navicat-wins-a-dbta-readers-choice-award.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat Wins a DBTA Readers' Choice Award!</title></head><body><b>Aug 31, 2023</b> by Robert Gravelle<br/><br/><p>In a resounding testament to its commitment to excellence and innovation, Navicat has been announced as the winner of the prestigious Best DBA Solution category in the <a class="default-links" href="https://www.dbta.com/Editorial/Actions/Winners-Circle-Navicat-159974.aspx" target="_blank">2023 DBTA Readers' Choice Awards</a>. (Navicat Data Modeler was also a finalist in the Best Data Modeling Solution category.) The annual awards program, hosted by Database Trends and Applications (DBTA) magazine, celebrates outstanding products and solutions in the dynamic landscape of data management and analytics. The recognition is a result of votes and opinions from DBTA's readership, comprising data and IT professionals hailing from diverse industries.</p><p>Comprising the perfect mix of cutting-edge technology and pragmatic utility, Navicat has solidified its place as a leading provider of natively designed database management and development solutions. The award underscores Navicat's dedication to empowering data professionals with tools that streamline database operations and enhance development endeavors.</p><p>The award-winning category, Best DBA Solution, stands as a nod to Navicat's unrivaled expertise in this realm. With Navicat's flagship product, <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, data professionals are equipped with an all-in-one database management tool that supports a multitude of database systems, including MySQL, PostgreSQL, MariaDB, MongoDB, SQL Server, Oracle, SQLite, and now Redis. 
The tool's hallmark user-friendly interface, combined with powerful features like data visualization, data modeling, and data synchronization, has secured Navicat's spot as a trusted solution for over 50% of Fortune 500 companies, effectively meeting their diverse database management needs.</p><p>Ken Lin, the CEO of Navicat, expressed his elation and honor at receiving this prestigious recognition. "We are thrilled and honored to receive this recognition from the DBTA readers," Lin remarked. "At Navicat, we are committed to providing our customers with the best possible tools for managing and developing their databases, and this award is a testament to the hard work and dedication of our team."</p><p>One of Navicat's notable features lies in its platform-specific design approach. The tools are natively designed for specific platforms, ensuring a seamless and optimized experience that aligns naturally with the system in use. This approach offers stability, usability, and an intuitive experience for efficient database management.</p><p>Navicat's commitment to continuous innovation and improvement is evident through its product range, culminating in the recent release of Navicat 16.2, which introduces Redis compatibility, an enhancement poised to further elevate users' capabilities in the database management sphere.</p><p>As the data management landscape continues to evolve, Navicat's commitment to innovation and excellence positions it at the forefront of the industry, a trusted partner for professionals navigating the complexities of database management and development.</p></body></html>]]></description>
</item>
<item>
<title>Redis Lists: an Overview</title>
<link>https://www.navicat.com/company/aboutus/blog/2340-redis-lists-an-overview.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Redis Lists: an Overview</title></head><body><b>Aug 14, 2023</b> by Robert Gravelle<br/><br/><p>In <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2331-redis-data-types.html" target="_blank">this</a> recent blog article, we learned about Redis' six data types. Redis Lists contain a collection of strings, sorted in the same order in which they were added. This article will expand on the List data type and highlight some of the main commands for managing them.</p><h1 class="blog-sub-title">List Performance</h1><p>In Redis, it's important to note that Lists are implemented as linked lists. A linked list is one whose nodes contain a data field as well as a "next" reference (link) to the next node in the list:</p><img alt="linked_list (5K)" src="https://www.navicat.com/link/Blog/Image/2023/20230814/linked_list.png" height="289" width="673" /><p>This has some important implications regarding performance. It is fast to add elements to the head and tail of the List but it's slower to search for elements within the List as we do not have indexed access to the elements (like we do in an array).</p><h1 class="blog-sub-title">Creating a List</h1><p>A List is created by using a Redis command that pushes data followed by a key name. There are two commands that we can use: RPUSH and LPUSH. If the key doesn't exist, these commands will create a new List with the passed arguments as elements. If the key exists but holds a value that is not a List, an error is returned.</p><h3>RPUSH</h3><p>RPUSH inserts a new element at the end of the List (at the tail):</p><pre>RPUSH key value [value ...]</pre><p>Let's create a "guitars" key that represents a List of guitar brands:</p><pre>RPUSH guitars "Jackson" // 1
RPUSH guitars "Fender"  // 2
RPUSH guitars "Gibson"  // 3</pre><p>Each time we insert an element, Redis replies with the length of the List after that insertion. 
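</p><p>Because a Redis List supports fast pushes at both ends, its behavior can be sketched with Python's collections.deque (a toy model, not the Redis client); each push returns the new length, just as Redis replies:</p>

```python
from collections import deque

# Toy model of RPUSH (tail append) and LPUSH (head prepend).
store = {}

def rpush(key, *values):
    lst = store.setdefault(key, deque())
    lst.extend(values)        # insert at the tail
    return len(lst)           # Redis replies with the List length

def lpush(key, *values):
    lst = store.setdefault(key, deque())
    lst.extendleft(values)    # insert at the head
    return len(lst)

print(rpush("guitars", "Jackson"))  # 1
print(rpush("guitars", "Fender"))   # 2
print(rpush("guitars", "Gibson"))   # 3
print(list(store["guitars"]))       # ['Jackson', 'Fender', 'Gibson']
```

<p>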
After the above three statements, the guitars List should contain the following three elements:</p><pre>Jackson Fender Gibson</pre><h3>LPUSH</h3><p>LPUSH behaves the same as RPUSH except that it inserts the element at the front of the List (at the head):</p><pre>LPUSH key value [value ...]</pre><p>We can use LPUSH to insert a new value at the front of the guitars list as follows:</p><pre>LPUSH guitars "Ibanez" // 4</pre><p>We now have four guitars, starting with "Ibanez":</p><pre>Ibanez Jackson Fender Gibson</pre><h1 class="blog-sub-title">Creating a List in Navicat</h1><p>In the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> Editor, list values are represented as Elements. Clicking on the ellipsis [...] button on the right of the Element opens a special Editor where you can enter individual list elements:</p><img alt="guitars_list_in_navicat_editor (66K)" src="https://www.navicat.com/link/Blog/Image/2023/20230814/guitars_list_in_navicat_editor.jpg" height="558" width="661" /><p>Clicking the Apply button adds the new list or element.</p><p>Once added, an element's position in the list may be changed using the up and down arrow buttons.</p><h1 class="blog-sub-title">Fetching List Items using LRANGE</h1><p>LRANGE returns a subset of the List based on a specified start and stop index:</p><pre>LRANGE key start stop</pre><p>We can see the full List by supplying 0 and -1 for the start and stop indexes:</p><pre>LRANGE guitars 0 -1 // returns Ibanez Jackson Fender Gibson</pre><p>Meanwhile, the following command retrieves the first two guitars:</p><pre>LRANGE guitars 0 1 // returns Ibanez Jackson</pre><h1 class="blog-sub-title">Removing Elements from a List</h1><p>LPOP removes and returns the first element of the List while RPOP removes and returns the last element of the List. 
Here are some examples:</p><pre>LPOP guitars // returns Ibanez
RPOP guitars // returns Gibson</pre><p>In the Navicat Editor, we can remove any List element by selecting it and clicking the Delete [-] button located under the Element values:</p><img alt="delete_button_in_navicat_editor (30K)" src="https://www.navicat.com/link/Blog/Image/2023/20230814/delete_button_in_navicat_editor.jpg" height="333" width="660" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog article highlighted some of the main commands for managing Lists in Redis, both via the redis-cli and using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>Working with Keys in Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2338-working-with-keys-in-redis.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Keys in Redis</title></head><body><b>Aug 7, 2023</b> by Robert Gravelle<br/><br/><p>Since Redis is a key-value store that lets us associate values with a key, it does not use the Data Manipulation Language (DML) and query syntax of relational databases. So how do we write, read, update, and delete data in Redis? This tutorial will cover how to write, read, update, and delete keys using the redis-cli as well as <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">Reading Data</h1><p>We can use the GET command to ask Redis for the string value of a key:</p><pre>GET key</pre><p>Here's an example in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> that fetches the value for the key "auth service" shown below: </p><img alt="auth_service (48K)" src="https://www.navicat.com/link/Blog/Image/2023/20230807/auth_service.jpg" height="213" width="823" /><p>As expected, it returns its value of "auth0":</p><img alt="GET_command (16K)" src="https://www.navicat.com/link/Blog/Image/2023/20230807/GET_command.jpg" height="241" width="451" /><p>However, if we try to fetch the value for "indiana_jones_episodes", we get an error "WRONGTYPE Operation against a key holding the wrong kind of value". That's because its value is a zset. 
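</p><p>The error makes sense once you picture every Redis key as carrying a type tag; the following Python sketch (a toy model with hypothetical data, not the Redis client) shows why GET rejects a non-string key:</p>

```python
# Toy model: each key stores a (type, value) pair, and GET only accepts strings.
store = {
    "auth service": ("string", "auth0"),
    "indiana_jones_episodes": ("zset", [(1, "Episode 1")]),  # hypothetical members
}

def get(key):
    kind, value = store[key]
    if kind != "string":
        # mirrors the redis-cli error for a type mismatch
        raise TypeError("WRONGTYPE Operation against a key holding the wrong kind of value")
    return value

print(get("auth service"))        # auth0
try:
    get("indiana_jones_episodes")
except TypeError as err:
    print(err)                    # WRONGTYPE ...
```

<p>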
Since Redis supports 6 data types, you need to know what type of value a key maps to, since the command to retrieve it differs for each data type.</p><p>Here are the commands to retrieve key value(s):</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>if value is of type string -&gt; GET <code>&lt;key&gt;</code></li><li>if value is of type hash -&gt; HGET or HMGET or HGETALL <code>&lt;key&gt;</code></li><li>if value is of type list -&gt; LRANGE <code>&lt;key&gt; &lt;start&gt; &lt;end&gt;</code></li><li>if value is of type set -&gt; SMEMBERS <code>&lt;key&gt;</code></li><li>if value is of type sorted set -&gt; ZRANGEBYSCORE <code>&lt;key&gt; &lt;min&gt; &lt;max&gt;</code></li><li>if value is of type stream -&gt; XREAD COUNT <code>&lt;count&gt;</code> STREAMS <code>&lt;key&gt;</code> <code>&lt;ID&gt;</code>.</li></ul><p>So, to retrieve the values for "indiana_jones_episodes", we can use ZRANGEBYSCORE and include the min and max arguments as follows:</p><img alt="ZRANGEBYSCORE_example (26K)" src="https://www.navicat.com/link/Blog/Image/2023/20230807/ZRANGEBYSCORE_example.jpg" height="281" width="451" /><p>That returns the first three values of the sorted set.</p><h1 class="blog-sub-title">Writing and Updating Data</h1><p>In Redis, the <code>SET key value</code> command works for both setting the initial value as well as for updates.</p><p>Of course, in Navicat, both keys and values can be modified at any time using the Editor:</p><img alt="update_example (54K)" src="https://www.navicat.com/link/Blog/Image/2023/20230807/update_example.jpg" height="507" width="664" /><h1 class="blog-sub-title">Deleting Data</h1><p>In Redis, we can use the DEL command to delete a key, along with all of its associated values. 
Its syntax is:</p><pre>DEL key</pre><p>For example, the following command would delete the "auth service" key:</p><pre>DEL "auth service"</pre><p>Be careful; Redis does not ask you to confirm the operation!</p><p>In Navicat, we can delete a key by selecting it in the table and then clicking the Delete [-] button.  A dialog will ask us to confirm before proceeding with the delete, in case we happened to click it by accident!</p><img alt="delete_button (60K)" src="https://www.navicat.com/link/Blog/Image/2023/20230807/delete_button.jpg" height="529" width="658" /><h1 class="blog-sub-title">Conclusion</h1><p>In this tutorial, we learned how to write, read, update, and delete keys using the redis-cli as well as <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>. Next time, we'll learn some more commands for working with data using redis-cli commands, along with how to accomplish the same thing using Navicat.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>A Guide to Redis Pub/Sub</title>
<link>https://www.navicat.com/company/aboutus/blog/2336-a-guide-to-redis-pub-sub.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Guide to Redis Pub/Sub</title></head><body><b>Jul 26, 2023</b> by Robert Gravelle<br/><br/><p>More than a mere database, Redis can also act as a message broker that supports both publishing and subscribing (pub/sub) operations. This blog will provide a brief overview of Redis's Pub/Sub capabilities using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>.</p><h1 class="blog-sub-title">About the Publish/Subscribe Messaging Paradigm</h1><p>Pub/Sub is a pattern whereby a sender (broadcaster) does not send messages directly to specific receivers (subscribers). Instead, published messages are sent over channels, without any knowledge of how many (if any) subscribers are tuning in. Subscribers then sign up for one or more channels so that they only receive messages that are of interest to them. Decoupling publishers and subscribers in this way allows for greater scalability and makes it easier to manage the flow of information in a complex system.</p><p>Redis Pub/Sub provides a lightweight, fast, and scalable messaging solution that can be used for various purposes, such as real-time notifications, sending messages between microservices, or communicating between different parts of a single application.</p><h1 class="blog-sub-title">Message Delivery in Redis</h1><p>Redis employs an at-most-once message delivery system. As the name suggests, it means that a message will be delivered only once, if at all. As such, once the message is sent by the Redis server, it's never sent again. If the subscriber is unable to receive the message (for example, due to an error or a network outage) the message is forever lost. Much like catching your favorite show on the radio, if you happen to miss it, you're out of luck. 
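</p><p>A few lines of Python make the at-most-once model concrete; this in-process sketch (a toy illustration, not the Redis protocol) delivers a message only to subscribers present at publish time:</p>

```python
# Toy in-process pub/sub: messages to channels with no current
# subscribers are simply lost, mirroring at-most-once delivery.
subscribers = {}   # channel name -> list of callback functions

def subscribe(channel, callback):
    subscribers.setdefault(channel, []).append(callback)

def publish(channel, message):
    receivers = subscribers.get(channel, [])
    for callback in receivers:
        callback(message)
    return len(receivers)   # like PUBLISH, reply with the receiver count

print(publish("test_channel", "missed"))   # 0: nobody listening, message gone
inbox = []
subscribe("test_channel", inbox.append)
print(publish("test_channel", "hello"))    # 1
print(inbox)                               # ['hello']
```

<p>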
If your application requires stronger delivery guarantees, you should use Redis Streams instead.</p><p>Moreover, Pub/Sub has no relation to the key space. This means that a message published on database 10 will be heard by a subscriber on database 1. If you need scoping, Redis suggests prefixing the channel name (e.g., prod_mychannel, test_mychannel).</p><h1 class="blog-sub-title">Publishing with Navicat for Redis</h1><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> (or Navicat Premium), we can access the Pub/Sub screen via the Pub/Sub button on the main toolbar. From there, we can publish messages using the Publish Pane:</p><img alt="pub-sub_screen (106K)" src="https://www.navicat.com/link/Blog/Image/2023/20230726/pub-sub_screen.jpg" height="672" width="962" /><p>In Redis, channels are not explicitly created by the user. The channels are created automatically when either the first message is published or a client subscribes to them. To demonstrate, we'll open two connections with the same Redis server. Each connection will act as a different client. The first connection will subscribe to the "test_channel", while the second one will publish a message to the same channel. By doing so, we would expect our message to be delivered to the subscriber as soon as it's published.</p><p>To subscribe to a channel in Navicat, we simply need to click the Subscribe button. That will open the Subscribe Dialog:</p><img alt="subscribe_dialog (117K)" src="https://www.navicat.com/link/Blog/Image/2023/20230726/subscribe_dialog.jpg" height="694" width="962" /><p>There, we would enter the channel name - "test_channel" - and then click Subscribe. 
After the dialog closes, the channel will appear in the Channels list, along with a record of the subscribe action:</p><img alt="channel_added (110K)" src="https://www.navicat.com/link/Blog/Image/2023/20230726/channel_added.jpg" height="672" width="962" /><p>To publish a message in Navicat, we would select the channel in the Channels list (it is selected by default since we only have one channel at this point), enter our message in the Message text field, and click on Publish. At that point, we should see a notification that the message was received:</p><img alt="message_received (75K)" src="https://www.navicat.com/link/Blog/Image/2023/20230726/message_received.jpg" height="534" width="985" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we explored Redis's Pub/Sub capabilities using Navicat for Redis.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>Using Database Aliases</title>
<link>https://www.navicat.com/company/aboutus/blog/2334-using-database-aliases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Database Aliases</title></head><body><b>Jul 10, 2023</b> by Robert Gravelle<br/><br/><p>SQL supports the use of aliases to give a table or a column a temporary name. Not only can they save on typing, but aliases can also make your queries more readable and understandable. In today's blog, we'll learn how to incorporate aliases into our queries using <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16.2</a>.</p><h1 class="blog-sub-title">Overview of SQL Aliases</h1><p>As mentioned in the introduction, both table and column names may be aliased. Here is the syntax for each:</p><h3>Alias Column Syntax</h3><pre>SELECT
  column_name [AS] alias_name,
  column_name AS 'Alias Name' -- for names with spaces
FROM table_name;</pre><h3>Alias Table Syntax</h3><pre>SELECT column_name(s)
FROM table_name [AS] alias_name;</pre><p>Two points to consider regarding aliases:</p><ul><li>An alias is usually preceded by the AS keyword, but it is optional. </li><li>An alias only exists for the duration of that query.</li></ul><h1 class="blog-sub-title">Table Aliases in Join Queries</h1><p>Here's a query against the Sakila Sample Database that fetches information about all copies of a particular film:</p><pre>SELECT *
FROM film f
  INNER JOIN inventory i ON i.film_id = f.film_id
WHERE i.store_id = 1 AND f.title = "Academy Dinosaur";</pre><p>In the above query, since both the film and inventory tables contain a film_id column, they must be fully qualified, i.e., prefixed by the table name. 
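</p><p>The need for qualification is easy to reproduce with Python's built-in sqlite3 module; the tiny tables below are hypothetical stand-ins for Sakila's film and inventory:</p>

```python
import sqlite3

# Two toy tables sharing a film_id column, so the join must qualify it.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE film (film_id INTEGER, title TEXT);
    CREATE TABLE inventory (inventory_id INTEGER, film_id INTEGER, store_id INTEGER);
    INSERT INTO film VALUES (1, 'Academy Dinosaur');
    INSERT INTO inventory VALUES (10, 1, 1), (11, 1, 1);
""")
rows = con.execute("""
    SELECT inventory.inventory_id, film.title
    FROM film
    INNER JOIN inventory ON inventory.film_id = film.film_id
    WHERE inventory.store_id = 1 AND film.title = 'Academy Dinosaur'
""").fetchall()
print(rows)   # [(10, 'Academy Dinosaur'), (11, 'Academy Dinosaur')]
```

<p>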
In this case, aliases may be employed to shorten the statement.</p><p>Here is the query in Navicat along with the results:</p><img alt="film_query (80K)" src="https://www.navicat.com/link/Blog/Image/2023/20230710/film_query.jpg" height="302" width="802" /><h1 class="blog-sub-title">Column Aliases</h1><p>In the case of column names, abbreviations are often utilized to keep column names short when designing database tables. For example:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>"so_no" stands for "sales order number".</li><li>"qty" stands for "quantity".</li></ul><p>Here, column aliases may be employed to make the column contents more intuitive. Here's an example:</p><pre>SELECT
  inv_no AS invoice_no,
  amount,
  due_date AS 'Due date',
  cust_no 'Customer No'
FROM
  invoices;</pre><p>You can also assign column aliases to expressions, as seen below:</p><img alt="expression_alias (113K)" src="https://www.navicat.com/link/Blog/Image/2023/20230710/expression_alias.jpg" height="574" width="516" /><p>The above query selects both the current and future price of products after applying a price increase.</p><h1 class="blog-sub-title">Limitations of Column Aliases</h1><p>Since column aliases are assigned in the SELECT clause, you can only reference the aliases in the clauses that are evaluated after the SELECT clause. Hence, you cannot include aliases in the WHERE clause; doing so will result in an error:</p><img alt="alias_error (42K)" src="https://www.navicat.com/link/Blog/Image/2023/20230710/alias_error.jpg" height="240" width="742" /><p>This happens because the database evaluates the WHERE clause before the SELECT clause. 
Therefore, at the time it evaluates the WHERE clause, the database doesn't yet know about the NewPrice column alias.</p><p>It is, however, permissible to use column aliases in the ORDER BY clause because it is evaluated after the SELECT clause:</p><img alt="alias_in_order_by (113K)" src="https://www.navicat.com/link/Blog/Image/2023/20230710/alias_in_order_by.jpg" height="589" width="515" /><p>The database evaluates the clauses of the query in the following order:</p><p>FROM > WHERE > SELECT > ORDER BY</p><h1 class="blog-sub-title">Table Aliases and Navicat</h1><p>In Navicat, once a table alias has been defined, it will come up in the auto-complete list.</p><img alt="alias_in_navicat (65K)" src="https://www.navicat.com/link/Blog/Image/2023/20230710/alias_in_navicat.jpg" height="239" width="713" /><p>That makes using aliases even more of a time saver!</p><h1 class="blog-sub-title">Final Thoughts on Using Database Aliases</h1><p>In today's blog, we learned how to incorporate aliases into our queries using <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16.2</a>. Aliases are an easy way to make your queries more readable and understandable, which is important because code isn't just about execution; it's also a communication mechanism.</p></body></html>]]></description>
</item>
<item>
<title>Redis Data Types</title>
<link>https://www.navicat.com/company/aboutus/blog/2331-redis-data-types.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Redis Data Types</title></head><body><b>Jun 26, 2023</b> by Robert Gravelle<br/><br/><p>One of the key features that sets Redis apart from other key-value stores is its support of numerous data types, which include strings, lists, sets, sorted sets, and hashes. This makes it easier for developers to solve problems because they tend to know which data type to use for a given task. This blog will outline the six data types supported by Redis.</p><h1 class="blog-sub-title">Strings</h1><p>Redis stores strings as a sequence of bytes. Strings in Redis are binary safe, meaning their length is stored explicitly rather than determined by one or more special terminating characters. As such, you can store anything up to 512 megabytes in one string.</p><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a>, we can create a new key/value pair via Edit -> Add Key from the main menu. That will add a new empty row in the Data View and open the Editor:</p><img alt="creating_a_string (102K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/creating_a_string.jpg" height="639" width="932" /><p>We can use the Editor to set the:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Key Name</li><li>Key Type</li><li>Value</li><li>TTL (Time To Live)</li></ul><p>Clicking the Apply button populates the new row with the contents of the Editor form. It also shows the size of the value:</p><img alt="new_string (34K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/new_string.jpg" height="232" width="731" /><h1 class="blog-sub-title">Hashes</h1><p>In Redis, a hash is a collection of field-value pairs. As such, hashes are a good choice for representing objects and for storing groupings of counters, among other things. 
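</p><p>From the command line, a hash representing an object might be built like this (the key and field names below are purely illustrative):</p><pre>HSET user:1000 name "Alice" visits 1
HINCRBY user:1000 visits 1
HGETALL user:1000</pre><p>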
Every hash can store up to 2^32 - 1 field-value pairs (that's more than 4 billion!).</p><p>For hash values, the Navicat Editor employs a table with Field and Value columns:</p><img alt="hash (78K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/hash.jpg" height="494" width="724" /><h1 class="blog-sub-title">Lists</h1><p>Redis Lists are simply lists of strings, kept in the order in which they were added. You can add elements to a Redis List at the head or at the tail. The max length of a list is 2^32 - 1, or 4,294,967,295, elements (that's more than 4 billion elements per list!).</p><p>In the Navicat Editor, list values are represented as an Element. Clicking on the ellipsis [...] button on the right of the Element opens a special Editor where you can enter the complete list:</p><img alt="list_editor (72K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/list_editor.jpg" height="651" width="740" /><h1 class="blog-sub-title">Sets</h1><p>Redis Sets are unordered collections of strings. A Set is similar to a List, except that a Set doesn't allow duplicates and doesn't preserve insertion order.</p><p>Sets can be sorted as well. In a Sorted Set, every member is associated with a score, which is used to keep the set ordered from the smallest to the greatest score. While members remain unique, scores may be repeated.</p><p>Navicat handles Sets much in the same way as Lists. Here's an example:</p><img alt="set_editor (76K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/set_editor.jpg" height="624" width="752" /><p>Sorted Sets are listed as "zset" in the Key Type drop-down:</p><img alt="zset_editor (23K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/zset_editor.jpg" height="204" width="722" /><h1 class="blog-sub-title">Streams</h1><p>The Redis stream data type was introduced in Redis 5.0. 
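</p><p>In CLI terms, stream entries are appended with XADD and read back with range commands such as XRANGE (the key and field names below are illustrative):</p><pre>XADD sensor:1 * temperature 19.8
XRANGE sensor:1 - +</pre><p>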
Streams model a log data structure but also implement several operations to overcome some of the limits of a typical append-only log.</p><p>Yes, Navicat for Redis supports the Stream data type!</p><img alt="stream_editor (24K)" src="https://www.navicat.com/link/Blog/Image/2023/20230623/stream_editor.jpg" height="205" width="723" /><h1 class="blog-sub-title">Final Thoughts on Redis Data Types</h1><p>This blog outlined the six data types supported by Redis, including the new Stream type.</p><p>Interested in giving Navicat for Redis a try? Download it <a class="default-links" href="https://navicat.com/en/download/navicat-for-redis" target="_blank">here</a>. The trial version is fully functional for 14 days.</p></body></html>]]></description>
</item>
<item>
<title>Getting Started with Redis</title>
<link>https://www.navicat.com/company/aboutus/blog/2268-getting-started-with-redis.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Getting started with Redis</title></head><body><b>Jun 16, 2023</b> by Robert Gravelle<br/><br/><p>Redis is an open source, BSD licensed, advanced key-value store, written in C. It's also referred to as a data structure server, since the keys can contain strings, hashes, lists, sets and sorted sets. This tutorial will provide the fundamentals of Redis concepts needed to start using it right away.</p><h1 class="blog-sub-title">Why Use Redis?</h1><p>Redis is certainly not the only key-value store to choose from. However, it does offer some advantages over its competitors. For instance:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Redis supports more data types than most other key-value data stores. Developers already know most types, including lists, sets, sorted sets, and hashes. This makes it easy to solve problems since developers tend to know which data type to use for a task.</li><li>Redis holds its database entirely in memory, using the disk only for persistence, making it exceptionally fast. In fact, it can perform about 110,000 SETs per second and about 81,000 GETs per second!</li><li>Redis can replicate data to any number of replicas.</li><li>All Redis operations are atomic, which ensures that if two clients concurrently access the same data, the Redis server will receive the updated value(s).</li><li>Redis natively supports Publish/Subscribe, making it ideal for message queues.</li><li>Redis is well suited for managing any short-lived data in your application, such as web application sessions, web page hit counts, etc.</li></ul><h1 class="blog-sub-title">When Not to Use Redis</h1><p>Of course, Redis is not without its flaws. It's not your best choice if you need to minimize the chance of data loss in case of outages, such as a sudden loss of power. You can configure multiple save points, such as every five minutes and/or 100 writes against the data set. 
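</p><p>In redis.conf, such save points are expressed as "save SECONDS CHANGES" pairs - a sketch of a typical configuration:</p><pre># snapshot if at least 100 keys changed within 300 seconds
save 300 100
# snapshot if at least 10000 keys changed within 60 seconds
save 60 10000</pre><p>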
However, should Redis stop working without a proper shutdown for any reason, you should be prepared to lose the latest several minutes of data.</p><p>Another issue is that Redis often needs to fork a child process in order to persist data to disk. This can consume a lot of system resources if the dataset is large, and may result in an interruption of service for clients ranging from a few milliseconds to a full second, depending on dataset size and CPU power.</p><h1 class="blog-sub-title">Installing Redis</h1><p>How you install Redis depends on your operating system and whether you'd like to install it bundled with Redis Stack and RedisInsight. The official Redis site has guides for every O/S:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li><a class="default-links" href="https://redis.io/docs/getting-started/installation/install-redis-on-linux" target="_blank">Install Redis on Linux</a></li><li><a class="default-links" href="https://redis.io/docs/getting-started/installation/install-redis-on-mac-os" target="_blank">Install Redis on macOS</a></li><li><a class="default-links" href="https://redis.io/docs/getting-started/installation/install-redis-on-windows" target="_blank">Install Redis on Windows</a></li><li><a class="default-links" href="https://redis.io/docs/stack/get-started/install" target="_blank">Install Redis with Redis Stack and RedisInsight</a></li><li><a class="default-links" href="https://redis.io/docs/getting-started/installation/install-redis-from-source" target="_blank">Install Redis from Source</a> (requires a C compiler and libc)</li></ul><p>Once you have Redis up and running, you can connect using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-redis" target="_blank">Navicat for Redis</a> and continue with the tutorial below.</p><h1 class="blog-sub-title">Exploring the Redis CLI</h1><p>Navicat for Redis includes a console, which allows you to communicate directly with a 
database instance:</p><img alt="console (42K)" src="https://www.navicat.com/link/Blog/Image/2023/20230616/console.jpg" height="251" width="663" /><p>One advantage to using the CLI in Navicat is that it provides auto-completion on every aspect of CLI commands, including command names as well as their parameters:</p><img alt="auto-complete_in_console (38K)" src="https://www.navicat.com/link/Blog/Image/2023/20230616/auto-complete_in_console.jpg" height="370" width="442" /><h1 class="blog-sub-title">Conclusion</h1><p>This tutorial provided the fundamentals of Redis concepts needed to start using it right away. There will be plenty more articles on Redis in the coming weeks, so be sure to check back often!</p></body></html>]]></description>
</item>
<item>
<title>Introducing Navicat for Redis!</title>
<link>https://www.navicat.com/company/aboutus/blog/2266-introducing-navicat-for-redis.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Introducing Navicat for Redis!</title></head><body><b>Jun 9, 2023</b> by Robert Gravelle<br/><br/><p>Version 16.2 of <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> added several exciting new features to an already stellar product, the most noteworthy being Redis support. Now, there is a Navicat administration and development client specifically for Redis. Navicat for Redis offers users an easy-to-use graphical interface for visualizing and optimizing Redis data. It includes a rich set of features for making routine management tasks simpler, easier and more efficient than ever before. It can connect to any local/remote Redis server, and is compatible with cloud databases like Redis Enterprise Cloud, Amazon ElastiCache, Google Memorystore and Microsoft Azure. This blog will outline some of Navicat for Redis's most important features.</p><h1 class="blog-sub-title">Data Viewer</h1><p>Beyond a simple key/value store, Redis is actually a data structures server with support for many different kinds of values. Viewing such complex data structures can be a challenge, but not with Navicat for Redis's data viewer; you can use it to view, edit, search and sort keys and data via the classic spreadsheet-like Grid View or the Tree View, both with a built-in editor. Navicat provides you with the tools you need to manage your data in a smooth and efficient manner.</p><p>Data view in Windows:</p><img alt="data_view (155K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/data_view.jpg" height="509" width="845" /><h1 class="blog-sub-title">Query Editing</h1><p>As with other Navicat products, Navicat for Redis lets you create, edit and run queries within the Query Editor, all without having to worry about syntax and proper usage of commands. 
It helps you to code quickly thanks to Code Completion, which gives you suggestions for keywords and reduces repetition in coding.</p><p>Query Editor:</p><img alt="Screenshot_Navicat_16.2_Redis_Windows_03_Query (126K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/Screenshot_Navicat_16.2_Redis_Windows_03_Query.png" height="auto" width="850" /><p>You can also save queries and commands as Snippets that you can reuse over and over again.</p><h1 class="blog-sub-title">Pub/Sub</h1><p>The Pub/Sub Tool allows you to send messages and subscribe to specific channels using a simple and intuitive UI. You can save the channels as a profile, or assign colors to the channels to easily distinguish the corresponding channels and their messages. You can choose from active channels, custom channels, and custom patterns.</p><p>Pub/Sub Tool:</p><img alt="Screenshot_Navicat_16.2_Redis_Windows_04_PubSub (366K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/Screenshot_Navicat_16.2_Redis_Windows_04_PubSub.png" height="auto" width="1000" /><h1 class="blog-sub-title">Collaboration</h1><p>Navicat for Redis includes Navicat Cloud, which allows you to synchronize your connection settings, queries, snippets and virtual group information to the cloud. The Navicat Cloud service provides real-time access to these items and lets you share them with your coworkers anytime and anywhere.</p><p>Navicat Cloud:</p><img alt="Screenshot_Navicat_16.2_Redis_Windows_11_NavicatCloud (410K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/Screenshot_Navicat_16.2_Redis_Windows_11_NavicatCloud.png" height="auto" width="1000" /><h1 class="blog-sub-title">Dark Mode</h1><p>The dark theme helps protect your eyes from the brightness of the standard "white" computer themes. 
Most importantly, dark mode only affects how application screens look, and does not alter their behavior in any way.</p><p>Redis data in dark mode:</p><img alt="dark_mode (163K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/dark_mode.jpg" height="555" width="831" /><h1 class="blog-sub-title">User Management</h1><p>Users and their associated permissions can be managed using an intuitive interface. There, you can create, edit and delete users in minutes without having to type commands, as well as easily create new privilege groups to apply multiple sets of rules to a user all at once.</p><p>User management screen:</p><img alt="Screenshot_Navicat_16.2_Redis_Windows_05_User_Management (96K)" src="https://www.navicat.com/link/Blog/Image/2023/20230609/Screenshot_Navicat_16.2_Redis_Windows_05_User_Management.png" height="auto" width="1000" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog outlined just some of Navicat for Redis's most important features. There are plenty of others, including Backup/Restore facilities, Automation, Secure Connections, and more. <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-redis" target="_blank">Cross-platform licensing is now available</a>. Whether you're a Windows, macOS, or Linux user, you can purchase once and select a platform to activate and transfer your license at a later date if needed.</p></body></html>]]></description>
</item>
<item>
<title>A Guide to MySQL Foreign Key Constraints</title>
<link>https://www.navicat.com/company/aboutus/blog/2254-a-guide-to-mysql-foreign-key-constraints.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Guide to MySQL Foreign Key Constraints</title></head><body><b>Jun 2, 2023</b> by Robert Gravelle<br/><br/><p>During the process of normalization, groups of fields that represent a distinct entity are moved from a larger and/or more central table to a separate one. Common fields (usually IDs) are then employed to maintain their relationship. We can see an example below:</p><img alt="film_id_fk (34K)" src="https://www.navicat.com/link/Blog/Image/2023/20230602/film_id_fk.jpg" height="417" width="222" /><p>In relational databases, referential integrity between tables is enforced using foreign key constraints.</p><p>This blog will cover how foreign keys work as well as how to create a foreign key constraint in MySQL using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16 for MySQL</a>.</p><h1 class="blog-sub-title">About the Film and Inventory Relation</h1><p>The model that we saw in the intro depicts a one-to-many relationship between the film and inventory tables whereby a film entity (1 row) may link to zero or more entities (rows) in the inventory table.</p><p>The film table is called the parent table or referenced table, and the inventory table is known as the child table or referencing table. As such, the foreign key columns of the child table often refer to the primary key columns of the parent table.</p><p>In this example, we are only focusing on one relation. In fact, a table can have more than one foreign key, with each foreign key referencing the primary key of a different parent table.</p><p>Once a foreign key constraint is in place, values in the foreign key columns of the child table must correspond to a row in the referenced columns of the parent table, or else be NULL. For example, each row in the inventory table has a film_id that exists in the film_id column of the film table. 
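</p><p>Behind the scenes, such a constraint corresponds to DDL along these lines (a sketch - the constraint name fk_inventory_film is arbitrary):</p><pre>ALTER TABLE inventory
  ADD CONSTRAINT fk_inventory_film
  FOREIGN KEY (film_id) REFERENCES film (film_id)
  ON DELETE RESTRICT
  ON UPDATE CASCADE;</pre><p>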
Multiple rows in the inventory table can have the same film_id.</p><p>In the next section we'll create a Foreign Key Constraint for this relationship in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16 for MySQL</a>.</p><h1 class="blog-sub-title">Creating a Foreign Key Constraint in Navicat</h1><p>In Navicat, you'll find Foreign Key Constraints on the Foreign Keys tab of the Table Designer. To create a new Foreign Key Constraint, open the child table - in this case inventory - in the Table Designer and click the Add Foreign Key button. That will create a new row in the Foreign Keys list:</p><img alt="new_fk_on_film_table (39K)" src="https://www.navicat.com/link/Blog/Image/2023/20230602/new_fk_on_film_table.jpg" height="149" width="780" /><p>Next, select the "film_id" column from the Fields drop-down, the "film" table from the Referenced Table drop-down, and "film_id" for the Referenced Fields:</p><img alt="new_fk_on_film_table_with_fields_populated (44K)" src="https://www.navicat.com/link/Blog/Image/2023/20230602/new_fk_on_film_table_with_fields_populated.jpg" height="149" width="774" /><p>The next step is to choose the On Delete and On Update actions. MySQL supports five different referential options, as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>CASCADE: When a row is deleted or updated in the parent table, the matching rows in the child table are automatically deleted or updated as well.</li><li>SET NULL: When a row is deleted or updated in the parent table, the foreign key columns of the matching rows in the child table are set to NULL.</li><li>RESTRICT: When a row in the parent table has a matching row in the referencing (child) table, MySQL rejects the attempt to delete or update the parent row.</li><li>NO ACTION: It is similar to RESTRICT, the difference being that the referential integrity check is performed after the attempt to modify the table; in InnoDB, NO ACTION behaves identically to RESTRICT.</li><li>SET DEFAULT: The MySQL parser recognizes this action; however, both the InnoDB and NDB storage engines reject it.</li></ul><p>Let's follow the example of the existing FK and choose an On Delete action of RESTRICT and an On Update action of CASCADE:</p><img alt="new_fk_on_film_table_with_action_fields_populated (46K)" src="https://www.navicat.com/link/Blog/Image/2023/20230602/new_fk_on_film_table_with_action_fields_populated.jpg" height="150" width="795" /><p>Finally, click the Save button to create the new Foreign Key Constraint. Note that Navicat will create the name for you if you do not populate the Name field.</p><h1 class="blog-sub-title">Conclusion</h1><p>Foreign Keys play an essential role in maintaining referential integrity between tables. As such, one should be created for every table relationship. <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16 for MySQL</a> makes it quite easy to manage your Foreign Key Constraints without having to write any SQL commands.</p></body></html>]]></description>
</item>
<item>
<title>Creating Views in Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/2251-creating-views-in-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Views in Navicat 16</title></head><body><b>May 19, 2023</b> by Robert Gravelle<br/><br/><p>As part of the process of normalizing database tables, redundant columns are extracted from higher-level tables into separate subsidiary ones. This often occurs due to some fields having a one-to-many relationship with the parent entity. For example, take the following model that was generated using <a class="default-links" href="https://www.navicat.com/en/products/navicat-data-modeler" target="_blank">Navicat Data Modeler</a>:</p><img alt="ups_model (189K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/ups_model.jpg" height="760" width="899" /><p>Appraisals were initially part of the ups table, but this led to data redundancy because there can be multiple vehicles appraised in one visit. Therefore, it made sense to remove the vehicle fields from the ups table and place them in their own table.</p><p>The drawback to normalization to third normal form (3NF) is that you wind up with a lot of ID fields in the main table. For a database practitioner looking at such a table, it becomes very challenging to know which entity each ID column points to. As an illustration, take a look at the ups table from the above model diagram, and notice how the CSRs, customers, and vehicles have all been reduced to numeric IDs that don't help identify the underlying entities in any way:</p><img alt="ups_table (195K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/ups_table.jpg" height="685" width="694" /><p>This is partially related to the use of auto-incrementing IDs as well as normalization, but, in any event, we can make the data much easier to read by creating a view. A database view is a subset of a database and is based on a query that runs on one or more database tables. Database views are saved in the database as named queries and can be used to save frequently used, complex queries. 
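</p><p>For instance, a view that resolves the ups table's ID columns back to human-readable names might be defined like this (the table and column names here are illustrative, based loosely on the model above):</p><pre>CREATE VIEW v_ups AS
SELECT u.up_id,
       c.name AS customer,
       s.name AS csr,
       u.up_date
FROM ups u
  INNER JOIN customers c ON c.customer_id = u.customer_id
  INNER JOIN csrs s ON s.csr_id = u.csr_id;</pre><p>Once created, the view can be queried just like an ordinary table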
</p><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a>, we can create a new view by choosing File -> New -> View... from the main menu:</p><img alt="new_view_menu_command (46K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/new_view_menu_command.jpg" height="370" width="394" /><p>That will add a new view tab.</p><p>The next step is to add the SQL statement that will generate the view fields:</p><img alt="view_definition (43K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/view_definition.jpg" height="272" width="412" /><p>If you need any help in writing your statement, there are Preview, Explain, View Builder, and Beautify SQL buttons on the tab toolbar.</p><p>If we don't want to wait for the view to be created before viewing the results, we can click the Preview button to see them right away:</p><img alt="view_preview (207K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/view_preview.jpg" height="825" width="602" /><p>Now the ID columns contain more descriptive - and meaningful - textual data.</p><p>Under the tab buttons, there are three more tabs - Definition, Advanced, and SQL Preview. The Advanced tab contains additional options such as the Algorithm, Definer, Security, and Check option, while the SQL Preview shows the generated CREATE VIEW statement:</p><img alt="sql_preview (30K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/sql_preview.jpg" height="180" width="412" /><p>The new view is named `Untitled` until we save it. 
At that point, a dialog appears in which we can specify the View Name:</p><img alt="save_as_dialog (46K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/save_as_dialog.jpg" height="338" width="574" /><p>Upon saving, the new view will be added to the Navigation Pane on the left-hand side and may be summoned at any time:</p><img alt="ups_view_in_object_pane (20K)" src="https://www.navicat.com/link/Blog/Image/2023/20230519/ups_view_in_object_pane.jpg" height="337" width="207" /><h1 class="blog-sub-title">Final Thoughts on Creating Views in Navicat 16</h1><p>In today's blog, we learned about database views and went through the process of making one to help identify records in a table that links to a number of dependent tables via ID fields.</p></body></html>]]></description>
</item>
<item>
<title>Multi-Version Concurrency Control in PostgreSQL</title>
<link>https://www.navicat.com/company/aboutus/blog/2249-multi-version-concurrency-control-in-postgresql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Multi-Version Concurrency Control in PostgreSQL</title></head><body><b>May 12, 2023</b> by Robert Gravelle<br/><br/><p>Whereas most database systems employ locks for concurrency control, PostgreSQL does things a little differently: it maintains data consistency by using a multi-version model, otherwise known as Multi-Version Concurrency Control, or MVCC for short. As a result, when querying a database, each transaction sees a snapshot of data as it was some time before, regardless of the current state of the underlying data. This prevents the transaction from viewing inconsistent data that could be caused by other concurrent transaction updates on the same data, and provides transaction isolation for each database session. This blog article will provide a brief overview of how the MVCC protocol works as well as cover some of the pros and cons of the MVCC approach.</p><h1 class="blog-sub-title">The MVCC Protocol Explained</h1><p>The main difference between lock models and MVCC is that the latter ensures that reading never blocks writing and writing never blocks reading.</p><p>In MVCC, every transaction has a transaction-timestamp that indicates when it started. When a transaction updates a certain data-item - such as a field, a record, or a table - a new version of that data-item is created while the older version is also retained. Each version is provided with:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>a write-timestamp to indicate the timestamp of the transaction (i.e. 
the time the transaction started) that created it and</li><li>a read-timestamp to indicate the latest timestamp of all the transactions that have read it.</li></ul><p>The basic idea of the MVCC protocol is that the transaction manager only allows operations if they can be allowed in a manner that is consistent with all transactions executing in their entirety at the moment of their timestamp. This is referred to as the presumed execution order. Database researcher <a class="default-links" href="https://www.bbk.ac.uk/our-staff/profile/9255599/jan-hidders" target="_blank">Jan Hidders</a> explains how the transaction manager accomplishes this as follows:</p><blockquote><p>If a transaction wants to read an item, it is given access to the version that it would have read in the presumed execution order. This will be the one with the latest write-timestamp just before its own timestamp. For example, if there are versions with write-timestamps 5, 12 and 20, and the timestamp of the transaction is 14, then the version with write-timestamp 12 is the one read by this transaction in the presumed execution order.</p><p>If a transaction wants to write an item, it is checked if there is not a read operation that was allowed earlier and that in the presumed execution order would read the new version caused by the requested write operation, but when it was allowed read another version. For example, assume again we have versions with write-timestamps 5, 10 and 16. Moreover assume the read-timestamps of these versions are 8, 12 and 20, respectively. If a transaction with timestamp 11 wants to update the item, there is a problem, because the version with write-timestamp 10 was read by a transaction with timestamp 12. So, if a version with timestamp 11 is created, the transaction with timestamp 12 would in the presumed execution order not have seen the version created by the transaction with timestamp 10, but the one that is now about to be created with timestamp 11. 
If, on the other hand, a transaction with timestamp 14 wants to write the item, this is fine, since as far as we know after t=12 in the presumed execution order the item was not read by any transaction until the moment it was updated at t=16.</p></blockquote><h1 class="blog-sub-title">Pros and Cons of MVCC</h1><p>Pros:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>As you can tell from the description above, all read operations will always be allowed immediately. This is usually not the case in a lock-based approach, where read-locks might be refused because of existing write-locks.</li><li>It also tends to allow more write operations to go through immediately than lock-based approaches usually do.</li></ul><p>Cons:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>If a write operation is refused, there is no alternative but to roll back or restart the transaction: once the update is refused it will also be refused if we retry it later. This differs from lock-based approaches, where we can usually wait until the lock becomes available. It is for this reason that MVCC is categorised as an optimistic protocol: it is very efficient if there are no conflicts, but once there is one you may have to undo a lot of work.</li><li>The many versions of an item might require significantly more storage space. In lock-based approaches only one version needs to be stored.</li><li>The removal of versions that are no longer needed can cause some overhead.</li></ul><h1 class="blog-sub-title">Final Thoughts on Multi-Version Concurrency Control in PostgreSQL</h1><p>This blog article provided an overview of how the MVCC protocol works and presented a few of its pros and cons.</p><p>Interested in working with PostgreSQL? You can try <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">Navicat 16 for PostgreSQL</a> for FREE for 14 days!</p></body></html>]]></description>
</item>
<item>
<title>Setting Query Timeouts in PostgreSQL</title>
<link>https://www.navicat.com/company/aboutus/blog/2237-setting-query-timeouts-in-postgresql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Setting Query Timeouts in PostgreSQL</title></head><body><b>May 5, 2023</b> by Robert Gravelle<br/><br/><p>At the top of <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor 3</a>'s Query Analyzer screen, there's a chart that shows queries with the longest wait times:</p><img alt="Screenshot_Navicat_Monitor_LongRunningQueries (102K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/Screenshot_Navicat_Monitor_LongRunningQueries.png" height="670" width="836" /><p>It's essential to identify laggard queries because they can bring everything down to a crawl.</p><p>Besides fixing a slow query once it's been identified, another strategy might include limiting query execution times across the board. In professional-grade databases such as PostgreSQL, there are settings to cap query execution time for the entire database or even per user, via the statement_timeout variable. In this blog, we'll learn how to work with this important database variable in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat 16 For PostgreSQL</a>.</p><h1 class="blog-sub-title">Setting the statement_timeout Variable at the Database Level</h1><p>Setting a default statement timeout for your database is an excellent starting point. This ensures that any application or person connecting to the database will not have queries running longer than that. A sane default would be either 30 or 60 seconds, but you can go higher if you wish. Here is a statement that sets a value of 60 seconds:</p><pre>ALTER DATABASE mydatabase SET statement_timeout = '60s';</pre><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat 16 For PostgreSQL</a>, we can view the statement_timeout via Tools > Server Monitor > PostgreSQL from the main menu. 
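</p><p>The timeout can also be set for just the current session, which is handy for one-off maintenance work (a sketch; the setting lasts only until the connection closes):</p><pre>SET statement_timeout = '30s';</pre><p>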
You'll find it on the Variables tab: </p><img alt="statement_timeout_variable (75K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/statement_timeout_variable.jpg" height="465" width="630" /><p>In fact, you may want to employ the Find tool to pinpoint the statement_timeout variable, as there are many variables! You can click the Highlight All toggle button to make the variable easier to spot once matched.</p><p>Of course, the SHOW statement works as well:</p><img alt="show_statement (9K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/show_statement.jpg" height="107" width="222" /><h1 class="blog-sub-title">Setting a Query Timeout for a Specific User</h1><p>For even more fine-grained control, we can set a query timeout value for a specific user (you know, the one who always selects the entire database!). This is achieved using the ALTER ROLE statement, which can set many database variables, including statement_timeout.</p><p>To try it out, let's create a new user role called "guest":</p><img alt="guest_role (42K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/guest_role.jpg" height="447" width="422" /><p>Now we can use the ALTER ROLE statement to limit query execution time as follows:</p><pre>ALTER ROLE guest SET statement_timeout='5min';</pre><p>We can query the pg_roles system view to obtain information about the statement_timeout (including how it was set):</p><img alt="select_rolconfig (33K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/select_rolconfig.jpg" height="249" width="433" /><p>The rolconfig value is an array, so we can <i>unnest</i> it to get one setting per row:</p><img alt="select_rolconfig_unnest (20K)" src="https://www.navicat.com/link/Blog/Image/2023/20230505/select_rolconfig_unnest.jpg" height="151" width="382" /><h1 class="blog-sub-title">Final Thoughts on Setting Query Timeouts in PostgreSQL</h1><p>It's crucial to be able to identify laggard queries because they can bring your database performance 
down to a crawl. For that, there's <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor"  target="_blank">Navicat Monitor 3</a>'s Long Running Queries chart at the top of the Query Analyzer screen. </p><p>Another approach is to limit how long a query can execute before it times out.  As we saw in today's blog, in PostgreSQL this can be done at the database, session, and even the individual role level. </p><p>If you haven't already set up your statement_timeout variable(s), I would encourage you to do so ASAP. This is just one component of proper database tuning that will help ensure your database instance stays healthy and available.</p><p>Interested in giving Navicat 16 For PostgreSQL a try?  You can download the fully functioning application <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-postgresql" target="_blank">here</a> to get a free 14-day trial!</p></body></html>]]></description>
</item>
<item>
<title>Implement Audit Trail Logging Using Triggers</title>
<link>https://www.navicat.com/company/aboutus/blog/2235-implement-audit-trail-logging-using-triggers.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Implement Audit Trail Logging Using Triggers</title></head><body><b>Apr 28, 2023</b> by Robert Gravelle<br/><br/><p>The idea behind database auditing is to know who accessed your database tables and when, along with what modifications were made to them. It's not only considered to be the standard minimum requirement for any enterprise-level application, but is also a legal requirement for many domains such as banking and cybersecurity. Database Audit Trails are essential in investigating all sorts of application issues such as unauthorized access, problematic configuration changes, and more.</p><p>In today's blog, we're going to add logging to the MySQL  <a class="default-links" href="https://www.postgresqltutorial.com/postgresql-getting-started/postgresql-sample-database/" target="_blank">Sakila Sample Database</a>  to audit the rental table. It's a key table because the database represents the business processes of a DVD rental store.</p><h1 class="blog-sub-title">Creating a Table to Store Audit Trail Data</h1><p>Ideally, it's best to have an audit table for each table being audited. 
Here's the DDL statement to create the audit trail table for the rental table:</p><pre>CREATE TABLE rental_audit_log (
  id         int NOT NULL AUTO_INCREMENT,
  rental_id  int NOT NULL,
  old_values varchar(255),
  new_values varchar(255),
  done_by    varchar(255) NOT NULL,
  done_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);</pre><p>Alternatively, in Navicat, you can use the Table Designer to specify all of the fields and attributes without having to write a DDL statement:</p><img alt="rental_audit_trail_table_design (53K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/rental_audit_trail_table_design.jpg" height="329" width="632" /><h1 class="blog-sub-title">Creating the Audit Logging Triggers</h1><p>We'll need to create 3 database triggers to insert records in the rental_audit_log table, one for each type of DML statement performed on the rental table (INSERT, UPDATE, and DELETE).</p><h3>AFTER INSERT Trigger</h3><p>INSERT statements on the rental table will be intercepted by the rental_insert_audit_trigger.  We'll get it to fire AFTER Insert operations and provide all of the new data as a JSON_OBJECT.  
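</p><p>In SQL, such a trigger might look like the following sketch. The audited columns here are illustrative (include whichever rental columns your organization needs to track), and the exact body may differ slightly from the one shown in the screenshots below:</p><pre>CREATE TRIGGER rental_insert_audit_trigger
AFTER INSERT ON rental
FOR EACH ROW
  INSERT INTO rental_audit_log (rental_id, new_values, done_by)
  VALUES (NEW.rental_id,
          JSON_OBJECT('rental_date',  NEW.rental_date,
                      'inventory_id', NEW.inventory_id,
                      'customer_id',  NEW.customer_id),
          CURRENT_USER());</pre><p>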
In Navicat, all of those details may be supplied on the Triggers tab of the Table Designer:</p><img alt="AFTER_INSERT_Trigger (62K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/AFTER_INSERT_Trigger.jpg" height="484" width="539" /><p>After adding a new row to the rental table, we can see a new record in the rental_audit_log as well:</p><img alt="rental_audit_log_entry (50K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/rental_audit_log_insert_entry.jpg" height="194" width="780" /><h3>AFTER UPDATE Trigger</h3><p>UPDATE statements on the rental table will be captured by the following rental_update_audit_trigger:</p><img alt="AFTER_UPDATE_Trigger (84K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/AFTER_UPDATE_Trigger.jpg" height="574" width="472" /><p>Now, every time a rental record is updated, the rental_update_audit_trigger is executed, and a rental_audit_log row will be created to capture both the old and the new state of the modified record.  In this case, we can see that user robg changed the rental_date from "2005-05-25 17:17:04" to "2005-05-31 19:47:04":</p><img alt="rental_audit_log_update_entry (52K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/rental_audit_log_update_entry.jpg" height="218" width="647" /><h3>AFTER DELETE Trigger</h3><p>To track DELETE statements on the rental table, we will create the following rental_delete_audit_trigger:</p><img alt="AFTER_DELETE_Trigger (69K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/AFTER_DELETE_Trigger.jpg" height="488" width="466" /><p>In this case, only the old_values column is set since there is no new record state. 
Hence the empty new_values column in the generated rental_audit_log row:</p><img alt="rental_audit_log_delete_entry (46K)" src="https://www.navicat.com/link/Blog/Image/2023/20230428/rental_audit_log_delete_entry.jpg" height="220" width="645" /><p>Here, we can see that user fsmith deleted record 1114 from the rental table on 2023-03-22 at 08:46:07.</p><h1 class="blog-sub-title">Final Thoughts on Audit Trail Logging Using Triggers</h1><p>In today's blog, we added logging to the MySQL Sakila Sample Database to audit the rental table. Our logging table included some of the most common audit fields. Some organizations include others, such as the DML operation type, while others only include changed fields. It's really whatever works best for the organization. </p></body></html>]]></description>
</item>
<item>
<title>Navicat Nominated for Readers Choice Awards!</title>
<link>https://www.navicat.com/company/aboutus/blog/2234-navicat-nominated-for-readers-choice-awards.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat Nominated for Readers Choice Awards!</title></head><body><b>Apr 25, 2023</b> by Robert Gravelle<br/><br/><p>Once again, Navicat has been nominated for the prestigious Database Trends and Applications (DBTA) Readers Choice Awards in the following categories:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Best DBA Solution (Navicat)</li><li>Best Data Modeling Solution (Navicat Data Modeler)</li><li>Best Database Development Solution (Navicat for MySQL)</li><li>Best Database Performance Solution (Navicat Monitor)</li></ul><p><strong>Right now is the time to vote for Navicat as the voting period will only be open until Wednesday May 10, 2023.</strong></p><p>Winners will be showcased in a special section on the DBTA website and in the August 2023 issue of Database Trends and Applications magazine.</p><h1 class="blog-sub-title">More on Navicat and Nominated Products</h1><p>Navicat is owned by PremiumSoft. Founded in 1999, PremiumSoft has developed a wide variety of applications for Windows, macOS, Linux and iOS. Navicat is the choice of over 5 million database users all around the world. Over 180,000 registered customers across 7 continents and 138 countries have chosen our products.  More than 50% of the Fortune 500 rely on Navicat every day. Some notable customers include Apple Inc., Google Inc., Oracle, Intel, Microsoft, Fujitsu, Accenture, HP, IBM, Ebay, Samsung, Sony, JP Morgan, KPMG, Barclays, DHL, Federal Express, General Electric, and many more.</p><p>Navicat Data Modeler 3 is a powerful and cost-effective database design tool which helps you build high-quality conceptual, logical and physical data models. It supports various database systems, including MySQL, MariaDB, Oracle, SQL Server, PostgreSQL, and SQLite. 
</p><p>Navicat 16 for MySQL is the ideal solution for MySQL/MariaDB administration and development.</p><p>Navicat Monitor 3 is a safe, simple and agentless remote server monitoring tool that is packed with powerful features to make your monitoring as effective as possible. Monitored servers include MySQL, MariaDB, PostgreSQL and SQL Server.</p><h1 class="blog-sub-title">Winner in 2022!</h1><p>All of the votes from satisfied Navicat customers helped Navicat take the <a href="https://www.dbta.com/Editorial/Trends-and-Applications/DBTA-Readers-Choice-Awards-Winners-2022-154324.aspx?PageNum=4" target="_blank">BEST DATABASE DEVELOPMENT SOLUTION category for Navicat for MySQL</a>. In doing so, Navicat beat Quest Toad for Oracle and Devart dbForge Studio.</p><p>Navicat Premium was also a finalist for the BEST DBA SOLUTION award, along with Devart dbForge Studio. </p><h1 class="blog-sub-title">About Database Trends and Applications</h1><p>Database Trends and Applications is a magazine that covers data and information management, big data, and data science. In addition, their website, <a href="https://www.dbta.com" target="_blank">dbta.com</a>, provides white papers, webinars, and other offerings for learning in the field. DBTA also circulates newsletters that connect subscribers with news and analysis about a diverse range of subjects such as Oracle News, Linux News, MultiValue News, General Information Management News, and more.</p><h1 class="blog-sub-title">Where to Vote</h1><p>You can cast your vote for your favorite Navicat product directly on the <a href="https://www.dbta.com/Readers-Choice-Awards" target="_blank">DBTA website</a>. 
To make your selections:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Locate the desired category on the page, e.g., Best DBA Solution, and select Navicat from the list of candidates:<p><img alt="navicat_in_best_dba_solution_drop-down (52K)" src="https://www.navicat.com/link/Blog/Image/2023/20230425/navicat_in_best_dba_solution_drop-down.jpg" height="604" width="441" /></p></li><li>Once you've made your selections for the categories for which you are voting, enter your name, company, and email at the bottom of the page in the Complete Your Vote section and click the Submit button:<p><img alt="complete_your_vote_form_fields (27K)" src="https://www.navicat.com/link/Blog/Image/2023/20230425/complete_your_vote_form_fields.jpg" height="511" width="463" /></p></li></ul><p>You should see the form replaced with a "Thanks for voting!" message confirming a successful form submission.</p><h1 class="blog-sub-title">Now it's Your Turn to Vote!</h1><p>Don't forget to <a href="https://www.dbta.com/Readers-Choice-Awards" target="_blank">cast your votes</a> by Wednesday, May 10, 2023. Again, the categories and nominated Navicat products are:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Best DBA Solution (Navicat)</li><li>Best Data Modeling Solution (Navicat Data Modeler)</li><li>Best Database Development Solution (Navicat for MySQL)</li><li>Best Database Performance Solution (Navicat Monitor)</li></ul></body></html>]]></description>
</item>
<item>
<title>Selecting Distinct Values From a Relational Database</title>
<link>https://www.navicat.com/company/aboutus/blog/2229-selecting-distinct-values-from-a-relational-database.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Selecting Distinct Values From a Relational Database</title></head><body><b>Apr 14, 2023</b> by Robert Gravelle<br/><br/><p>A table column, such as one that stores first names, may contain many duplicate values.  If you're interested in listing the different (distinct) values, there needs to be a way to do so without resorting to complex SQL statements. In ANSI SQL compliant databases like PostgreSQL, SQL Server, and MySQL, the way to select only the distinct values from a column is to use the SQL DISTINCT clause. It removes duplicates from the result set of a SELECT statement, leaving only unique values. In this blog article, we'll learn how to use it.</p><h1 class="blog-sub-title">Syntax and Behavior</h1><p>To use the SQL DISTINCT clause, all you need to do is insert the DISTINCT keyword between SELECT and the column and/or expression list, like so:</p><pre>SELECT DISTINCT columns/expressions
FROM tables
[WHERE conditions];</pre><p>You may include one or more columns and/or expressions in your statement, as the query uses the combination of values in all specified columns in the SELECT list to evaluate their uniqueness. Also, if you apply the DISTINCT clause to a column that has NULL values, the DISTINCT clause will keep only one NULL and eliminate the others. In other words, the DISTINCT clause treats all NULL values as the same value.</p><h1 class="blog-sub-title">One Column Example</h1><p>A common use case for a query is to list all of the cities and/or countries of an organization's customers or users. 
Here's a query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a> against the <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx" target="_blank">classicmodels sample database</a>: </p><img alt="city_query (107K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/city_query.jpg" height="693" width="550" /><p>As highlighted with the red outline, there are duplicate cities.</p><p>To get a list of unique cities, we can add the DISTINCT keyword to the SELECT statement:</p><img alt="city_query_distinct (49K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/city_query_distinct.jpg" height="633" width="383" /><p>We can utilize Navicat's code-completion feature to bring up the DISTINCT keyword. As you type your SQL statement in the editor, Navicat displays suggestions in drop-down lists, assisting with statement completion and showing the available database objects (databases, tables, fields, views, etc.) with their appropriate icons:</p><img alt="autocomplete (30K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/autocomplete.jpg" height="214" width="496" /><h1 class="blog-sub-title">Multiple Column Example</h1><p>The DISTINCT keyword may also be applied to multiple columns. In that context, the query will only return rows where the combination of values in the selected columns is unique. 
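</p><p>In plain SQL, the multi-column form amounts to the following sketch (assuming the same customers table from the classicmodels sample database):</p><pre>SELECT DISTINCT city, country
FROM customers;</pre><p>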
First, let's add the country field to our last query:</p><img alt="city_country_query (70K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/city_country_query.jpg" height="657" width="386" /><p>Once again, we see duplicates, which makes sense because a duplicated city will likely reside in the same country.</p><p>This time, adding the DISTINCT keyword will cause the query engine to look at the combination of values in both the city and country columns to evaluate and remove the duplicates:</p><img alt="city_country_query_distinct (68K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/city_country_query_distinct.jpg" height="659" width="382" /><h1 class="blog-sub-title"> DISTINCT with Null Values</h1><p>As mentioned above, the DISTINCT clause treats all NULL values as the same value so that only one instance of NULL is included in the result set. We can test that out for ourselves by querying a column such as this one in the same customers table that we queried previously:</p><img alt="region_column (102K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/region_column.jpg" height="516" width="425" /><p>As predicted, adding the DISTINCT keyword removed all but one instance of NULL:</p><img alt="region_query_distinct (42K)" src="https://www.navicat.com/link/Blog/Image/2023/20230414/region_query_distinct.jpg" height="644" width="383" /><h1 class="blog-sub-title">Final Thoughts on Selecting Distinct Values From a Relational Database</h1><p>In this blog article, we learned how to use the SQL DISTINCT clause, which removes duplicates from the result set of a SELECT statement, leaving only unique values. As we saw, it can work on one or more columns as well as NULL values. However, should you need to apply an aggregate function on one or more columns, you should use the GROUP BY clause instead.</p></body></html>]]></description>
</item>
<item>
<title>A Quick Guide to Naming Conventions in SQL - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2226-a-quick-guide-to-naming-conventions-in-sql-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Quick Guide to Naming Conventions in SQL - Part 3</title></head><body><b>Apr 6, 2023</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Stored Procedures, Functions, and Views</h1><p>Welcome to the 3rd and final installment on SQL naming conventions. In <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2132-a-quick-guide-to-naming-conventions-in-sql-part-1.html" target="_blank">Part 1</a>, we covered the rules for naming tables, while <a class="default-links" href="http://navicat.com/en/company/aboutus/blog/2225-a-quick-guide-to-naming-conventions-in-sql-part-2.html" target="_blank">Part 2</a> explored conventions for column names. This installment will offer some guidelines for naming other database objects such as Stored Procedures, Functions, and Views.</p><h1 class="blog-sub-title">Stored procedures</h1><p>A stored procedure is a set of statements that performs some defined actions. Typically, they contain statements that are used frequently. Stored procedures are similar to functions in programming in that they can accept parameters and perform operations when we call them.</p><h3>General Format</h3><p>Most DBAs like to give their stored procedures a prefix that identifies them as such, followed by the action that the stored procedure takes and then the name representing the object or objects it will affect:</p><pre>[prefix]_[action]_[object]</pre><p>Actions that you may take with a stored procedure include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Insert</li><li>Delete</li><li>Update</li><li>Select</li><li>Get</li><li>Validate</li></ul><h3>Choosing a Prefix</h3><p>The most obvious prefix to use on a stored procedure is "sp_".  That being said, there's at least one good reason to avoid it as it's already used by SQL Server as a standard naming convention in the master database.  
If you do not specify the database where the object is, SQL Server will first search the master database to see if the object exists there and then it will search the user database. Even if you don't host your database(s) on SQL Server, you should probably avoid using this as a naming convention, just in case you ever switch.</p><p>Consider a prefix like "usp_" instead.</p><h3>Putting It All Together</h3><p>Here are a few examples of well-named stored procedures to help you formulate your own:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>usp_insert_person</li><li>usp_delete_person</li><li>usp_update_person</li><li>usp_select_person</li><li>usp_get_person</li><li>usp_validate_person</li></ul><h1 class="blog-sub-title">User-defined Functions</h1><p>Similar to built-in database functions, a user-defined function accepts only input parameters and contains a set of SQL statements that perform actions and return the result, which can be either a single value or a table. </p><p>The naming convention for a user-defined function is to have an "fn_" prefix, followed by its action. Hence, the syntax should be very similar to that of stored procedures:</p><pre>[prefix]_[action]_[object]</pre><p>Functions that return true or false may follow the rule of using "is" or "are" as the action (verb).</p><p>Some examples of function names would include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>fn_count_string_instances</li><li>fn_get_customer_balance</li><li>fn_is_inventory_in_stock</li><li>fn_get_column_type</li></ul><h1 class="blog-sub-title">Views</h1><p>A view is a "virtual table" in a database that is defined by a query.  A view can combine data from two or more tables using joins, or contain just a subset of a table's information.  This makes them convenient for abstracting, or hiding, complicated queries. 
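</p><p>As a concrete illustration, here is a sketch of a view definition (the table and column names are hypothetical, loosely modeled on the Sakila schema):</p><pre>CREATE VIEW vw_staff_list AS
SELECT s.staff_id, s.first_name, s.last_name, a.address
FROM staff s
JOIN address a ON a.address_id = s.address_id;</pre><p>Queries can then select from vw_staff_list as if it were an ordinary table.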
</p><p>The naming convention for a view is to use a "v_" or "vw_" prefix, followed by a name describing the result set. As such, the syntax should be:</p><pre>[prefix]_[result]</pre><p>Here are a few examples:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>vw_actor_full_name</li><li>vw_sales_by_store</li><li>v_staff_list</li><li>v_sales_by_product_category</li></ul><h1 class="blog-sub-title">Final Thoughts on Naming Conventions for Stored Procedures, Functions, and Views</h1><p>In this three-part series, we explored some commonly used naming conventions and considered how best to formulate our own. Part 1 covered Table names, while Part 2 focused on column names.  Finally, Part 3 addressed Naming Conventions for other database objects such as Procedures, Functions, and Views.</p><p>Remember that you need not apply rules to all database objects. You could choose to apply naming convention rules to table and column names only. It's really your decision, as using a naming convention is not mandatory, but beneficial nonetheless.</p></body></html>]]></description>
</item>
<item>
<title>A Quick Guide to Naming Conventions in SQL - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2225-a-quick-guide-to-naming-conventions-in-sql-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Quick Guide to Naming Conventions in SQL - Part 2</title></head><body><b>Mar 31, 2023</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Column Names</h1><p>Welcome to the 2nd installment on SQL naming conventions. As mentioned in <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2132-a-quick-guide-to-naming-conventions-in-sql-part-1.html" target="_blank">part 1</a>, naming conventions are a set of rules (written or unwritten) that should be utilized in order to increase the readability of the data model. These may be applied to just about anything inside the database, including tables, columns, primary and foreign keys, stored procedures, functions, views, etc. Having covered the rules for naming tables in part 1, we'll be looking at column names in this installment.  Other database objects such as Procedures, Functions, and Views will be explored in part 3.</p><h1 class="blog-sub-title">The Primary Key Column</h1><p>The primary key is a field or a combination of fields in a table that uniquely identifies the records in the table. A table can have only one primary key. As such, many DBAs prefer to simply name this column "id".  Others append the "_id" suffix to the table name, as seen here in the Sakila Sample Database:</p><img alt="actor_id_column (50K)" src="https://www.navicat.com/link/Blog/Image/2023/20230331/actor_id_column.jpg" height="188" width="657" /><p>Likewise, you should also assign your PK constraint a meaningful name.  The naming convention for a primary key constraint is that it should have a "pk_" prefix, followed by the table name, i.e. "pk_&lt;table_name&gt;".</p><h1 class="blog-sub-title">Foreign Key Columns</h1><p>A foreign key is a field in the table that references a primary key in another table. A good rule to follow is to use the referenced table name and "_id", e.g. customer_id, employee_id. 
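</p><p>For instance, a hypothetical orders table referencing the customer table might bring these conventions together like so (the table and its columns are illustrative, not part of any sample database):</p><pre>CREATE TABLE orders (
  order_id    int NOT NULL AUTO_INCREMENT,
  customer_id int NOT NULL,
  order_date  DATE NOT NULL,
  PRIMARY KEY (order_id),
  CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);</pre><p>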
This will help us identify the field as a foreign key column and also point us to the referenced table.</p><p>Here's a city table that contains a foreign key to the country table's country_id field:</p><img alt="country_id_foreign_key (50K)" src="https://www.navicat.com/link/Blog/Image/2023/20230331/country_id_foreign_key.jpg" height="190" width="659" /><p>The naming convention for a foreign key constraint is to have an "fk_" prefix, followed by the target table name, followed by the source table name. Hence, the syntax should be "fk_&lt;target_table&gt;_&lt;source_table&gt;".</p><p>Following the foreign key constraint naming convention for the city table would give us the name "fk_city_country":</p><img alt="fk_city_country_foreign_key (36K)" src="https://www.navicat.com/link/Blog/Image/2023/20230331/fk_city_country_foreign_key.jpg" height="131" width="676" /><h1 class="blog-sub-title">Data Columns</h1><p>In the section on Describing Real-World Entities in Part 1, it states:</p><blockquote>Any time that you're naming entities that represent real-world things, you should use their proper nouns. These would apply to tables like employee, customer, city, country, etc. Usually, a single word should exactly describe what is in that table. </blockquote><p>The same rules can and should be applied to data columns. Again, you should use the fewest possible words to describe what is stored in that column, e.g., country_name, country_code, customer_name. If two tables will have columns with the same name, you could add something to keep the name unique, although that's not strictly necessary as table prefixing will differentiate the columns in queries. Nonetheless, having unique names for each column is helpful because it reduces the chance of later mixing up the two columns while writing queries. Names like customer_name and city_name are likely to come up in more than one table. 
If that concerns you, you can always make the names more descriptive, such as order_customer_name or city_of_residence_name.</p><h1 class="blog-sub-title">Dates</h1><p>For dates, it's good practice to describe what the date represents. Names like start_date and end_date are pretty common and generic. You can describe them more precisely by using names like call_start_date and call_end_date.</p><h1 class="blog-sub-title">Final Thoughts on Naming Conventions for Column Names</h1><p>You probably noticed from all of the examples presented that both table and column names should be in lowercase with words separated by an underscore ("_"). For example, customer_name and invoice_date as opposed to customerName and invoiceDate. This works well with the SQL style convention of capitalizing statement names, clauses, and other keywords for better code readability, e.g. <code>SELECT customer_name, invoice_date FROM orders;</code></p></body></html>]]></description>
</item>
<item>
<title>Viewing PostgreSQL Instance Details in Navicat Monitor 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2224-viewing-postgresql-instance-details-in-navicat-monitor-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Viewing PostgreSQL Instance Details in Navicat Monitor 3</title></head><body><b>Mar 23, 2023</b> by Robert Gravelle<br/><br/><p>Navicat Monitor 3 added support for PostgreSQL, one of the most popular modern relational databases in use today. New features include an SQL Profiler for PostgreSQL instances as well as enhanced Query Analyzer and Long Running Queries pages, both of which were touched upon in the <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/2215-monitoring-postgresql-with-navicat-monitor-3-0.html" target="_blank">Monitoring PostgreSQL with Navicat Monitor 3.0</a> blog article. Today's topic will be the Instance Details page.</p><h1 class="blog-sub-title">Instance Details Page at a Glance</h1><p>The Instance Details page is accessible by clicking on an instance card in the Overview page.  It shows server parameters and metrics in a visual way, providing you with a quick view of the server load and performance. There, you can view:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>the top 5 databases based on size</li><li>the top 5 tables based on size</li><li>system charts on CPU, Memory, Swap, and DB Disk Usage</li><li>metrics on connections, queries, tables, buffer, cache and sort, as well as locks</li></ul><img alt="instance_details_page (111K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/instance_details_page.jpg" height="726" width="1238" /><h3>Information on the Instance Details Page</h3><p>The Instance Details page is split up into the following three sections:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Summary</li><li>Databases &amp; Tables</li><li>Charts</li></ul><p>Here's a breakdown of each section:</p>  <h4><b>Summary</b></h4><p>The summary displays host information about the server, server properties, alerts and status. 
There, you can view or edit the instance variables as well as click on raised alerts to open the alert page.</p>  <h4><b>Databases &amp; Tables</b></h4><p>This section displays the top five databases and tables by size, as well as a sixth category called "Others" that groups the remaining databases or tables. You can hover over each segment to show the size percentage. To view size information of all databases and tables in the instance, click the View All button.</p><img alt="top_5_databases_and_tables_section (39K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/top_5_databases_and_tables_section.jpg" height="235" width="973" />  <h4><b>Charts</b></h4><p>Navicat Monitor displays visualizations of server performance metrics as small charts. The charts track and refresh the data at certain intervals, which are configurable by the user. Related metrics are displayed using different predefined colors and symbols. Note that, due to size constraints, the axis scales and labels of the small charts are not displayed.</p><p>Both the time interval (X-axis) and refresh options are configurable via the AUTO REFRESH drop-down menu, the START FROM datetime picker, the time INTERVAL drop-down menu and the panning arrows:</p><img alt="time_interval_and_refresh_options (17K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/time_interval_and_refresh_options.jpg" height="220" width="442" /><p>Hovering the mouse pointer over a chart will display the values at that point:</p><img alt="overviewInstanceChart (15K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/overviewInstanceChart.png" height="154" width="294" /><p>You can also click on a chart to open the Chart page in order to view the details of an individual chart or to see more charts.</p><h1 class="blog-sub-title">Pausing (and Resuming) Monitoring</h1><p>You can pause and resume monitoring the instance using the Pause Monitoring and Resume Monitoring buttons, located in the upper-right 
quadrant of the Instance Details page:</p><img alt="pause_and_resume_monitoring_buttons (13K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/pause_and_resume_monitoring_buttons.jpg" height="130" width="518" /><p>When paused, Navicat Monitor stops collecting information from the server until monitoring resumes, and adds a "[Paused]" indicator beside the names of paused instances in the Instances Pane (on the left):</p><img alt="paused_instance (17K)" src="https://www.navicat.com/link/Blog/Image/2023/20230323/paused_instance.jpg" height="238" width="256" /><h1 class="blog-sub-title">Final Thoughts on Viewing PostgreSQL Instance Details in Navicat Monitor 3</h1><p>Today's blog covered <a class="default-links" href="https://navicat.com/en/discover-navicat-monitor" target="_blank">Navicat Monitor 3</a>'s Instance Details screen. It shows the server parameters and metrics in a highly visual way, gives you a quick view of the server load and performance, and more!</p></body></html>]]></description>
</item>
<item>
<title>Trace Queries on your PostgreSQL Instances with Navicat Monitor 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2223-trace-queries-on-your-postgresql-instances-with-navicat-monitor-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Trace Queries on your PostgreSQL Instances with Navicat Monitor 3</title></head><body><b>Mar 16, 2023</b> by Robert Gravelle<br/><br/><p>Navicat Monitor 3 comes packed with a variety of exciting new features. Case in point, you can now create traces that collect query data based on selected filters from the server log. When creating a trace, you can define criteria to filter the data collected by SQL Profiler and set a schedule for executing the trace. In today's blog we'll learn how to create a trace and view its results.</p><h1 class="blog-sub-title">More about Traces</h1><p>The Tracing feature is part of the SQL Profiler, which is only available for PostgreSQL.  The SQL Profiler provides graphical query execution details for locating inefficient and slow queries.</p><p>The data collected from traces may be analyzed and used to troubleshoot performance issues. For example, you can see which queries are affecting performance in the production environment.</p><h1 class="blog-sub-title">Creating a Trace</h1><p>You can create new traces on the SQL Profiler, Query Analyzer, and Long Running Queries pages by clicking the Add Trace icon <img alt="icon_addTrace (4K)" src="https://www.navicat.com/link/Blog/Image/2023/20230316/icon_addTrace.png" height="20" width="20" /> or + New Trace.</p> <figure>  <figcaption>The Add Trace icon on the Long Running Queries page</figcaption>  <img alt="new_trace_icon_on_long_running_queries_page (40K)" src="https://www.navicat.com/link/Blog/Image/2023/20230316/new_trace_icon_on_long_running_queries_page.jpg" height="237" width="630" /></figure> <p>On the SQL Profiler page you'll have to select the instance before clicking the + New Trace button:</p><img alt="selected_postgreSQL_instance (19K)" src="https://www.navicat.com/link/Blog/Image/2023/20230316/selected_postgreSQL_instance.jpg" height="216" width="331" /><p>Clicking the Add Trace icon <img alt="icon_addTrace (4K)"
src="https://www.navicat.com/link/Blog/Image/2023/20230316/icon_addTrace.png" height="20" width="20" /> or the + New Trace button will bring up the New Trace dialog. A prompt may pop up asking you to authorize Navicat Monitor to get relevant data from your instance. </p><p>Here are all of the details that you can enter on the New Trace dialog:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>TRACE NAME: the name of the trace.</li><li>USER FILTER: the users/roles whose queries to include in the trace. Empty means including queries from all users/roles.</li><li>DATABASE FILTER: the databases to trace. Empty means including queries against all databases.</li><li>QUERY FILTER: search strings or QueryIDs to filter queries for the trace.</li><li>MAX TRACE ROW COUNT: the maximum number of rows for the trace. SQL Profiler will terminate the trace when it reaches the row count.</li><li>SCHEDULE: scheduling details for executing the trace.</li><li>Share with: who can see the trace.</li></ul><p>Here's the New Trace dialog with some of the above fields filled in:</p><img alt="new_trace_dialog (56K)" src="https://www.navicat.com/link/Blog/Image/2023/20230316/new_trace_dialog.jpg" height="654" width="852" /><p>Clicking Create Trace starts tracing according to the provided schedule. You should then see results after the first time period has elapsed.</p><h1 class="blog-sub-title">Viewing Trace Results</h1><p>A trace provides a graphical representation of the execution plan for each query with statistics for its components.
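</p><p>For reference, the text form of an execution plan like the ones a trace visualizes can be obtained directly from PostgreSQL via the EXPLAIN command (a generic sketch; the query below is hypothetical and is not the query from the trace that follows):</p><pre>
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.customer_id, count(*) AS rentals
FROM customer c
INNER JOIN rental r ON r.customer_id = c.customer_id
GROUP BY c.customer_id;
</pre><p>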
Here is the trace for a query against the Sakila Sample Database:</p><img alt="trace_results (179K)" src="https://www.navicat.com/link/Blog/Image/2023/20230316/trace_results.jpg" height="914" width="1165" /><p>You can see from the above screen capture that the Trace Results are divided into 3 sections:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Query Table: The query table shows the basic information for the queries. Select a query to show its details and plans.</li><li>Query Details: shows the complete statement of the query.</li><li>Execution Plan: the execution plan that is generated for each query can be viewed in three different formats: Visual, Charts and Text-Based.</li></ul><h1 class="blog-sub-title">Final Thoughts on Tracing Queries on your PostgreSQL Instances with Navicat Monitor 3</h1><p>In today's blog we saw how easy it is to create a trace and view its results in Navicat Monitor 3. Only available for PostgreSQL, traces collect query data based on selected filters from the server log. I think that you'll find them to be indispensable for locating inefficient and slow queries.</p></body></html>]]></description>
</item>
<item>
<title>Monitoring PostgreSQL with Navicat Monitor 3.0</title>
<link>https://www.navicat.com/company/aboutus/blog/2215-monitoring-postgresql-with-navicat-monitor-3-0.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Monitoring PostgreSQL with Navicat Monitor 3.0 </title></head><body><b>Mar 10, 2023</b> by Robert Gravelle<br/><br/><p>Version 3 of Navicat Monitor has just been released. Unsurprisingly, it packs many outstanding new features, as well as numerous improvements to existing features. One of the most noteworthy changes between version 2 and 3 is added support for PostgreSQL, including an SQL Profiler for PostgreSQL instances.</p><p>Today's blog will provide a quick guide on getting set up to monitor your PostgreSQL instances using Navicat Monitor 3.0.</p><h1 class="blog-sub-title">Adding a PostgreSQL Instance</h1><p>You can see all of the monitored database instances on the Overview screen. In order to monitor our PostgreSQL instance, we need to add it to this screen.  To do that, we simply need to click the "+New Instance" button at the top of the screen.  Doing so presents a context list of available database types - both traditional and cloud-based:</p><img alt="new_instance_button (34K)" src="https://www.navicat.com/link/Blog/Image/2023/20230310/new_instance_button.jpg" height="259" width="454" /><p>Select the PostgreSQL item to open the New PostgreSQL Instance dialog:</p><img alt="new_postgresql_instance_dialog (68K)" src="https://www.navicat.com/link/Blog/Image/2023/20230310/new_postgresql_instance_dialog.jpg" height="833" width="664" /><p>Navicat Monitor can connect to the database server over a secure SSH tunnel to send and receive monitoring data.
It allows you to connect your servers even if remote connections are disabled or are blocked by firewalls.</p><p>In the PostgreSQL Server section, enter the following information:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Host Name: The host name or IP address of the database server.</li><li>Port: The TCP/IP port for connecting to the database server.</li><li>Username: A monitoring user for connecting to the database server.</li><li>Password: The login password of the monitoring user.</li><li>Server Type: The type of the server. Can be Unix-like or Windows.</li></ul><p>Navicat Monitor can also collect the DB server's system performance metrics such as CPU and memory resources. If you do not provide a login for collecting these metrics, you can still monitor your server, but system performance metrics will not be shown.</p><p>Once you've entered all of the above information, click the "New" button to create the new instance.</p><h1 class="blog-sub-title">Activating a Token</h1><p>Now that we've added our PostgreSQL instance, we're ready to activate it. To do that, we'll need to assign a token to it via Configurations > Activate Tokens &amp; License Instances.</p><img alt="activate_tokens_button (122K)" src="https://www.navicat.com/link/Blog/Image/2023/20230310/activate_tokens_button.jpg" height="984" width="1280" /><p>To activate the instance, we can locate it in the Unlicensed Instances list, check the box beside it, and click the License button to move it into the Licensed Instances list.  Here's our "PostgreSQL Test DB 1" instance in the Licensed list:</p><img alt="activated_pstgresql_instance (35K)" src="https://www.navicat.com/link/Blog/Image/2023/20230310/activated_pstgresql_instance.jpg" height="293" width="754" /><p>We can now receive server statistics about our instance's performance regarding query execution as well as server load, availability, disk usage, network I/O, table locks and more.
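</p><p>On the PostgreSQL side, the monitoring user referenced above is best created as a dedicated, least-privilege role. Here is a minimal sketch, assuming PostgreSQL 10 or later (the role name is hypothetical; the built-in pg_monitor role grants read access to the standard monitoring views):</p><pre>
CREATE ROLE navicat_monitor LOGIN PASSWORD 'choose-a-strong-password';
GRANT pg_monitor TO navicat_monitor;
</pre><p>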
By easily tracking the deviations and traffic among servers, we can examine possible solutions and adjust our server settings accordingly.</p><h1 class="blog-sub-title">Monitoring Query Performance</h1><p>The Query Analyzer tool provides a graphical representation of the query logs that makes interpreting their contents much easier. In addition, the Query Analyzer tool enables us to monitor and optimize query performance, visualize query activity statistics, analyze SQL statements, as well as quickly identify and resolve long running queries. Here's the Query Analyzer for our new instance:</p><img alt="query_analyzer (170K)" src="https://www.navicat.com/link/Blog/Image/2023/20230310/query_analyzer.jpg" height="977" width="1066" /><p>There are no Long Running Queries at this time because the database is new and not currently in use.</p><h1 class="blog-sub-title">Final Thoughts on Monitoring PostgreSQL with Navicat Monitor 3.0</h1><p>Thanks to Navicat Monitor 3.0, we can now monitor our PostgreSQL instances via many useful tools, including the Enhanced Query Analyzer and Long Running Queries screens. </p><p>Navicat Monitor 3.0 is available for Windows, macOS (using Homebrew), and Linux. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">try Navicat Monitor 3.0 for 14 days</a> free of charge to sample all of its new features before you buy.</p></body></html>]]></description>
</item>
<item>
<title>Navicat Monitor 3.0 is Here!</title>
<link>https://www.navicat.com/company/aboutus/blog/2181-navicat-monitor-3-0-is-here.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat Monitor 3.0 is Here!</title></head><body><b>Mar 3, 2023</b> by Robert Gravelle<br/><br/><p>It seems like only yesterday that <a class="default-links" href="https://navicat.com/en/company/press/1085-navicat-monitor-version-2-0-is-released%20.html" target="_blank">Navicat Monitor 2.0 was released</a>, adding great new features to an already stellar product.  Now, version 3.0 is introducing yet more outstanding features, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Support for monitoring PostgreSQL instances.</li><li>Support for the SQL Profiler on PostgreSQL instances.</li><li>Enhanced Query Analyzer.</li><li>Enhanced Long Running Queries.</li><li>Many other new features and improvements.</li></ul><p>Of course, all of Navicat Monitor's existing functionality remains in place, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Agentless Architecture</li><li>Real-time Performance Monitoring</li><li>Easily see how your instances are currently functioning</li><li>Advanced root cause analysis</li><li>Set custom alert thresholds</li><li>Get notifications via email, SMS or SNMP</li><li>Replication Monitoring</li><li>Powerful Query Analyzer</li><li>and more...</li></ul><p>In today's blog, we'll be taking a look at the brand new Navicat Monitor 3.0 with the emphasis being on the new features listed above.</p><h1 class="blog-sub-title">About Navicat Monitor</h1><p>Navicat Monitor is a safe, simple and agentless remote server monitoring tool that includes many powerful features to make your monitoring as effective as possible. You can access Navicat Monitor from anywhere via a web browser to view statistics on server load and performance regarding its availability, disk usage, network I/O, table locks and more.
Using this valuable data, you can easily examine possible solutions, tune the databases, and address potential issues before they can become serious problems or costly outages.</p><p>Navicat Monitor is server-based software that can be accessed from anywhere via a web browser. With web access, you can easily and seamlessly keep track of your servers around the world, around the clock.</p><h1 class="blog-sub-title">Support for PostgreSQL Instances</h1><p>Just as version 2 added support for SQL Server, version 3 can monitor PostgreSQL as well.  Here are all supported DB types in the New Instance menu:</p><img alt="new_instance_menu (25K)" src="https://www.navicat.com/link/Blog/Image/2023/20230303/new_instance_menu.jpg" height="379" width="358" /><p>Selecting PostgreSQL from the list brings up the New PostgreSQL Instance dialog, which includes all of the configuration options that you might need for a PostgreSQL instance: </p><img alt="new_postgresql_instance_dialog (68K)" src="https://www.navicat.com/link/Blog/Image/2023/20230303/new_postgresql_instance_dialog.jpg" height="833" width="664" /><p>You'll be happy to know that the SQL Profiler also works for PostgreSQL instances. It's a tool that provides graphical query execution details for locating inefficient and slow queries. It supports the creation of traces to collect data about the queries executed on an instance. The data can later be analyzed and used to troubleshoot performance issues.</p><img alt="postgresql_trace (167K)" src="https://www.navicat.com/link/Blog/Image/2023/20230303/postgresql_trace.jpg" height="783" width="1185" /><h1 class="blog-sub-title">Enhanced Query Analyzer and Long Running Queries</h1><p>The Query Analyzer tool provides a graphical representation of the query logs that makes interpreting their contents much easier.
In addition, the Query Analyzer tool enables you to monitor and optimize query performance, visualize query activity statistics, analyze SQL statements, as well as quickly identify and resolve long running queries. In version 3, the Long Running Queries chart has been moved to the top of the page for easier identification:</p><img alt="query_analyzer (129K)" src="https://www.navicat.com/link/Blog/Image/2023/20230303/query_analyzer.jpg" height="894" width="965" /><p>Moreover, the entire Long Running Queries section provides the ability to drill down to specific intervals:</p><img alt="long_running_queries (83K)" src="https://www.navicat.com/link/Blog/Image/2023/20230303/long_running_queries.jpg" height="608" width="982" /><p>Information about Long Running Queries may be exported as a PDF or scheduled report.</p><h1 class="blog-sub-title">Final Thoughts on Navicat Monitor 3.0</h1><p>In this blog, we explored just a few of the exciting new features of Navicat Monitor 3.0, including Support for PostgreSQL Instances, as well as Enhanced Query Analyzer and Long Running Queries. There are plenty more new features that we'll be looking at over the next several weeks. </p><p>Navicat Monitor 3.0 is available for Windows, macOS (using Homebrew), and Linux. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">try Navicat Monitor 3.0 for 14 days</a> free of charge to sample all of its new features before you buy.</p></body></html>]]></description>
</item>
<item>
<title>A Quick Guide to Naming Conventions in SQL - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2132-a-quick-guide-to-naming-conventions-in-sql-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Quick Guide to Naming Conventions in SQL - Part 1</title></head><body><b>Feb 15, 2023</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Table Names</h1><p>Naming conventions are a set of rules (written or unwritten) that should be utilized in order to increase the readability of the data model. You may apply these rules when naming anything inside the database, including tables, columns, primary and foreign keys, stored procedures, functions, views, etc. You need not apply rules to all database objects. For instance, it would be perfectly fine to limit naming convention rules to tables and column names. It's really your decision, as using a naming convention is not mandatory, but beneficial nonetheless. This three part series will present some commonly used naming conventions and provide some tips for formulating your own. Part 1 will cover Table names, while Part 2 will focus on column names.  Finally, Part 3 will address Naming Conventions for other database objects such as Foreign Keys, Procedures, Functions, and Views.</p><h1 class="blog-sub-title">Why You Should Use a Naming Convention</h1><p>Databases rarely have a small number of tables. In fact, it's not at all uncommon to have hundreds of tables. By following a naming convention, you'll make your life a lot easier by increasing the overall model readability, and making it easier to locate database (DB) objects.</p><p>Another good reason is that the database will slowly evolve over time. Although changes to the schema are usually avoided and done only when necessary, changing the name of a database object could affect your application code in a myriad of ways. 
Since you can expect the database to remain, more or less, similar to its initial incarnation, applying best practices from the start and continuing to use them as you add new objects will keep your database structure well organized over time.</p><h1 class="blog-sub-title">Singular vs. Plural Table Names</h1><p>One of the most commonly asked questions regarding the naming of tables is whether to use the singular or plural form. There are many differing opinions on this matter. In fact, we can see both views expressed in the schemas of the MySQL classicmodels and sakila sample databases, with the former employing plural table names, and the latter utilizing singular naming:</p><img alt="classicmodels_and_sakila_table_names (62K)" src="https://www.navicat.com/link/Blog/Image/2023/20230215/classicmodels_and_sakila_table_names.jpg" height="801" width="316" /><p>If it helps, most DBAs go with singular names. One reason is that plural names like "users" and "roles" could lead to some weird table names down the road, such as "users_have_roles" rather than "user_has_role".</p><h1 class="blog-sub-title">Describing Real-World Entities</h1><p>Any time that you're naming entities that represent real-world things, you should use their proper nouns. These would apply to tables like employee, customer, city, country, etc. Usually, a single word should exactly describe what is in that table. </p><p>There are times that you'll have to use more than one word to describe what is in a table. One such example can be seen in the classicmodels database.
There is one table for "orders" and another for "order_details": </p><figure>  <figcaption>orders Table</figcaption>  <img alt="orders_table (250K)" src="https://www.navicat.com/link/Blog/Image/2023/20230215/orders_table.jpg" height="558" width="694" /></figure><figure>  <figcaption>order_details Table</figcaption>  <img alt="order_details_table (250K)" src="https://www.navicat.com/link/Blog/Image/2023/20230215/order_details_table.jpg" height="561" width="498" /></figure><p>The "orders" table contains fields such as the Customer ID, Employee ID, Order Date, Shipped Date, Freight, and Shipping Address.  Meanwhile, the "order_details" table contains data about the products ordered, such as the Quantity ordered and Price. The table could have been named "product_details", but that would not convey that the product was associated with an order.</p><h1 class="blog-sub-title">Naming Related Tables</h1><p>For relations between two tables, it's standard practice to use both tables' names. A verb may also be added between both names to describe that action, for example "user_has_role", or simply "user_role". The Sakila Sample Database follows this convention by joining related tables with an intermediary table that combines both names. We can observe two examples in the database model below - "film_actor" and "film_category": </p><img alt="sakila_model (141K)" src="https://www.navicat.com/link/Blog/Image/2023/20230215/sakila_model.jpg" height="602" width="755" /><h1 class="blog-sub-title">Final Thoughts on Table Naming Conventions in SQL</h1><p>Don't be afraid to stray from a naming convention if it doesn't make logical sense in a given situation. For example, if we had a product and invoice table, and we wanted to specify which products were on which invoice, the name "invoice_item" might make more sense than either "invoice_product" or "invoice_contains_product". </p></body></html>]]></description>
</item>
<item>
<title>Supercharging Your Queries with Navicat and ChatGPT</title>
<link>https://www.navicat.com/company/aboutus/blog/2131-supercharging-your-queries-with-navicat-and-chatgpt.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Supercharging Your Queries with Navicat and ChatGPT</title></head><body><b>Feb 9, 2023</b> by Robert Gravelle<br/><br/><p>It's official; the age of Artificial Intelligence (AI) has arrived! Until our new overlords decide to use us to power their machines, let's take the time to fully enjoy all the benefits they provide and the myriad of ways that they make our lives easier. Case in point, the AI-driven chatbot, <a class="default-links" href="https://chatgptonline.net/" target="_blank">ChatGPT</a>, by OpenAI, has been lauded for its ability to produce tremendously spot-on answers to questions across a broad range of topics. And, although ChatGPT may not be making our jobs obsolete just yet, it has proven to be amazingly adept at working with data sets, much like a DBMS. In today's blog, we'll explore how ChatGPT could be utilized to supplement a professional database development and administration tool like Navicat.</p><h1 class="blog-sub-title">Creating the Data Set</h1><p>ChatGPT is able to model a formal dataset from a list of delimited values. All you need to do is tell it what to do using regular, conversational language. ChatGPT is also able to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. We can see an example on the <a class="default-links" href="https://blog.ouseful.info/" target="_blank">OUseful.Info blog</a> that created a table named "racerresults". Here are the instructions given to ChatGPT, along with a sampling of the input data:</p><pre>Treat the following as a tab separated dataset. Using just the first, third and fourth columns, treat the data as if it were a relational SQL database table called "racerresults" with columns "Race", "Driver" and "Team", and the "Race" column as a primary key column. 
Display a SQL statement that could create the corresponding table and populate it with the data.

Bahrain	20 Mar 2022	Charles Leclerc	FERRARI	57	1:37:33.584
Saudi Arabia	27 Mar 2022	Max Verstappen	RED BULL RACING RBPT	50	1:24:19.293
Australia	10 Apr 2022	Charles Leclerc	FERRARI	58	1:27:46.548
Emilia Romagna	24 Apr 2022	Max Verstappen	RED BULL RACING RBPT	63	1:32:07.986
Miami	08 May 2022	Max Verstappen	RED BULL RACING RBPT	57	1:34:24.258
Spain	22 May 2022	Max Verstappen	RED BULL RACING RBPT	66	1:37:20.475
Monaco	29 May 2022	Sergio Perez	RED BULL RACING RBPT	64	1:56:30.265
etc...</pre><p>From the above instructions and data, ChatGPT generated the following CREATE TABLE and INSERT statements:</p><img alt="raceresults_create_and_insert_statements (122K)" src="https://www.navicat.com/link/Blog/Image/2023/20230209/raceresults_create_and_insert_statements.jpg" height="639" width="829" /><p>With the data in place, we're ready to run queries against it.</p><h1 class="blog-sub-title">Querying a Data Set with ChatGPT</h1><p>In terms of query formulation, ChatGPT shares some similarities with Navicat, in that both allow you to construct queries with little knowledge of SQL. To do that, Navicat features the Query Builder tool. Here it is in macOS:</p><img alt="queryBuilder (136K)" src="https://www.navicat.com/link/Blog/Image/2023/20230209/queryBuilder.png" height="611" width="841" /><p>As for ChatGPT, it takes a question phrased in regular, conversational language, and produces the required SQL statement(s).</p><p>This ability to translate natural prompts into structured outputs has made it appealing not only to database professionals but also to teams across a variety of industries. As per G2's guide on <a class="default-links" href="https://learn.g2.com/github-copilot-vs-chatgpt" target="_blank">GitHub Copilot vs ChatGPT for coding</a>, ChatGPT dominates computer software, IT services, marketing, financial services, and education management.
This industry-wide adoption underscores its versatility, showing that its value extends far beyond fun use cases like data experiments or emoji assignments.</p><p>For instance, given the following list of historical figures:</p><img alt="historical_figures (56K)" src="https://www.navicat.com/link/Blog/Image/2023/20230209/historical_figures.jpg" height="653" width="539" /><p>We can simply ask ChatGPT how it would query for the oldest historical figure. Here is the resulting SQL statement and explanation offered by ChatGPT:</p><img alt="oldest_historical_figure_query (166K)" src="https://www.navicat.com/link/Blog/Image/2023/20230209/oldest_historical_figure_query.jpg" height="915" width="1543" /><h1 class="blog-sub-title">Fun with Data</h1><p>ChatGPT can do a lot more than generate queries; it can also think creatively to assign emojis to each historical figure:</p><img alt="historical_figures_with_emojis (131K)" src="https://www.navicat.com/link/Blog/Image/2023/20230209/historical_figures_with_emojis.jpg" height="1003" width="1367" /><h1 class="blog-sub-title">Final Thoughts on Supercharging Your Queries with Navicat and ChatGPT</h1><p>While AI bots like ChatGPT are a long way from replacing traditional database tools, they do offer another tool to database practitioners who are looking for new and innovative ways of approaching data-related tasks. At the time of this writing, ChatGPT was at capacity and unable to accept new users, but once things die down a bit, I would urge you to give ChatGPT a try.</p></body></html>]]></description>
</item>
<item>
<title>Correlated Subqueries</title>
<link>https://www.navicat.com/company/aboutus/blog/2104-correlated-subqueries.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Correlated Subqueries</title></head><body><b>Feb 2, 2023</b> by Robert Gravelle<br/><br/><p>Subqueries can be categorized into two types:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>A non-correlated (simple) subquery obtains its results independently of its containing (outer) statement.</li><li>A correlated subquery references values from its outer query in order to execute.</li></ul><p>When a non-correlated subquery executes (independently of the outer query), the subquery executes first, and then passes its results to the outer query.  Meanwhile, a correlated subquery typically obtains values from its outer query before it executes. When the subquery returns, it passes its results to the outer query. </p><p>Now that we know the difference between a correlated subquery and its non-correlated counterpart, this blog will cover how to write a correlated subquery in <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16</a>.</p><h1 class="blog-sub-title">Syntax and Usage</h1><p>A correlated subquery is evaluated once for each row processed by the parent statement. The parent statement can be a SELECT, UPDATE, or DELETE statement. Here's the syntax for a SELECT query:</p><pre>
SELECT column1, column2, ...
FROM table1 outer
WHERE column1 operator
  (SELECT column1, column2
   FROM table2
   WHERE expr1 = outer.expr2);
</pre><p>A correlated subquery is one way of reading every row in a table and comparing values in each row against related data. It is used whenever a subquery must return a different result or set of results for each candidate row considered by the main query.
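</p><p>As a concrete illustration (a generic sketch with hypothetical table and column names, not drawn from this article's example), the following query returns each employee who earns more than the average salary of their own department. The subquery must re-execute for every candidate row because it references outer_emp.department_id:</p><pre>
SELECT outer_emp.name, outer_emp.salary
FROM employee outer_emp
WHERE outer_emp.salary >
  (SELECT AVG(inner_emp.salary)
   FROM employee inner_emp
   WHERE inner_emp.department_id = outer_emp.department_id);
</pre><p>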
In other words, you can use a correlated subquery to answer a multipart question whose answer depends on the value in each row processed by the parent statement.</p><h1 class="blog-sub-title">A Practical Example</h1><p>Here's a rather ingenious query from Stack Overflow against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/sakila-structure.html" target="_blank">Sakila sample database</a> that fetches the most viewed film per country. </p><p>The first step is to count how many times each film was viewed in each country.  Here is the SELECT statement for that:</p><pre>
SELECT
  F.title AS title,
  CO.country_id AS country_id,
  CO.country AS country_name,
  count(F.film_id) AS times
FROM customer C
INNER JOIN address A ON C.address_id = A.address_id
INNER JOIN city CI ON A.city_id = CI.city_id
INNER JOIN country CO ON CI.country_id = CO.country_id
INNER JOIN rental R ON C.customer_id = R.customer_id
INNER JOIN inventory I ON R.inventory_id = I.inventory_id
INNER JOIN film F ON I.film_id = F.film_id
GROUP BY F.film_id, CO.country_id;
</pre><p>And here is the above query and results in <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16</a>:</p><img alt="most viewed film per country inner query (170K)" src="https://www.navicat.com/link/Blog/Image/2023/20230202/most%20viewed%20film%20per%20country%20inner%20query.jpg" height="823" width="579" /><p>The next step is to convert the above results into a list of countries, along with the most viewed film title and the number of times it was viewed.
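</p><p>Although the finished statement appears as a screenshot, its shape can be sketched roughly as follows, wrapping the SELECT from the previous step in a derived table (a reconstruction based on the explanation accompanying the screenshot, not the exact statement shown there):</p><pre>
SELECT
  country_name,
  SUBSTRING_INDEX(
    GROUP_CONCAT(title ORDER BY times DESC SEPARATOR '|||'),
    '|||', 1) AS most_viewed_film,
  MAX(times) AS times
FROM (
  SELECT F.title AS title, CO.country_id AS country_id,
         CO.country AS country_name, count(F.film_id) AS times
  FROM customer C
  INNER JOIN address A ON C.address_id = A.address_id
  INNER JOIN city CI ON A.city_id = CI.city_id
  INNER JOIN country CO ON CI.country_id = CO.country_id
  INNER JOIN rental R ON C.customer_id = R.customer_id
  INNER JOIN inventory I ON R.inventory_id = I.inventory_id
  INNER JOIN film F ON I.film_id = F.film_id
  GROUP BY F.film_id, CO.country_id
) AS per_country
GROUP BY country_id, country_name;
</pre><p>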
Here's the full query, with the correlated subquery included and an explanation to follow:</p><img alt="most viewed film per country correlated query (159K)" src="https://www.navicat.com/link/Blog/Image/2023/20230202/most%20viewed%20film%20per%20country%20correlated%20query.jpg" height="829" width="555" /><p>Explanation:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Subquery: Fetches a list of movie counts, grouped by country.</li><li>GROUP_CONCAT(title ORDER BY times DESC SEPARATOR '|||') returns ALL titles in that 'row', with the most-viewed title first. The separator doesn't matter, as long as it never occurs in a title.</li><li>SUBSTRING_INDEX('...', '|||', 1) extracts the first part of the string until it finds "|||", in this case the first (and thus most-viewed) title.</li></ul><h1 class="blog-sub-title">Final Thoughts on Correlated Subqueries</h1><p>In today's blog we learned how to write a correlated subquery using <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium 16</a>. Be forewarned that correlated subqueries can be slow.  However, with proper optimization, their speed can be increased significantly.</p></body></html>]]></description>
</item>
<item>
<title>How to Perform a Search and Replace in SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/2095-how-to-perform-a-search-and-replace-in-sql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How to Perform a Search and Replace in SQL</title></head><body><b>Jan 18, 2023</b> by Robert Gravelle<br/><br/><p>As you are no doubt aware, updating text values in the database is a commonplace occurrence.  Nonetheless, it is a rare database administrator (DBA) that doesn't feel some trepidation upon executing batch updates against production tables. In today's blog, we'll learn how to use the SQL REPLACE() function to replace either a complete or partial string in a table column.</p><h1 class="blog-sub-title">A Typical Scenario</h1><p>Here's a screenshot of the products table from the classicmodels sample database:</p><img alt="products_table (114K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/products_table.jpg" height="548" width="575" /><p>Suppose that the makers of Chef Anton products have decided to enclose their products in quotation marks (""). This would require a total of 4 steps:</p><ol><li>Employ the LIKE operator to identify rows with Chef Anton products.</li><li>Parse out the product name.</li><li>Add the enclosing quotation marks.</li><li>Convert the SELECT QUERY to an UPDATE.</li></ol><p>Let's go over each step.</p><h3>Identify Rows with Chef Anton Products</h3><p>As mentioned above, we can utilize the LIKE operator to identify rows with Chef Anton products.  Each of these begins with the string "Chef Anton's ", so we can search for it.  To do that, we will need to escape the single quote (') character and include the multi-character "%" wildcard.  Here is the resulting query and results in <a href="https://www.navicat.com/en/download/navicat-premium">Navicat Premium 16</a>:</p><img alt="like_query (49K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/like_query.jpg" height="352" width="403" /><h3>Parse Out the Product Name</h3><p>The next step is to parse out the product name so that we can enclose it within quotation marks. 
To do that, we can employ the LENGTH() function to calculate the number of characters after the "Chef Anton's " portion of the string and supply that result to the RIGHT() function:</p><img alt="select_right_query (55K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/select_right_query.jpg" height="281" width="669" /><h3>Add the Enclosing Quotation Marks</h3><p>The last step in constructing the SELECT query is to add the quotes around the product name. Having parsed out the product name, we can provide it to the REPLACE() function as the first parameter, along with the concatenated (quoted) version as the second parameter:</p><img alt="replace_query (68K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/replace_query.jpg" height="332" width="698" /><p>An alternative way to achieve the same end is to simply use the CONCAT() function and feed it each part of the string as follows:</p><pre>SELECT CONCAT(
         LEFT(ProductName, LENGTH('Chef Anton\'s ')),
         '"',
         RIGHT(ProductName, LENGTH(ProductName) - LENGTH('Chef Anton\'s ')),
         '"'
       ) AS product_name
FROM products
WHERE ProductName LIKE 'Chef Anton\'s %';</pre><h3>Convert the SELECT QUERY to an UPDATE</h3><p>All that's left to do now is to convert our SELECT query into an UPDATE. Having executed the query as a SELECT first, we can be confident that the UPDATE statement won't affect any rows other than the ones we're interested in. 
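</p><p>As a sketch, the finished UPDATE might look something like the following, based on the CONCAT() variant shown above (hypothetical; the actual statement and its results appear in the screenshot below):</p><pre>UPDATE products
SET ProductName = CONCAT(
      LEFT(ProductName, LENGTH('Chef Anton\'s ')),
      '"',
      RIGHT(ProductName, LENGTH(ProductName) - LENGTH('Chef Anton\'s ')),
      '"')
WHERE ProductName LIKE 'Chef Anton\'s %';</pre><p>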
Here is the UPDATE query and results confirming that only two rows were updated:</p><img alt="update_query (60K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/update_query.jpg" height="304" width="754" /><p>Upon refreshing the products table, we can now see our updated values:</p><img alt="updated_products_table (39K)" src="https://www.navicat.com/link/Blog/Image/2023/20230118/updated_products_table.jpg" height="208" width="460" /><h1 class="blog-sub-title">Final Thoughts on How to Perform a Search and Replace in SQL</h1><p>In this blog, we learned how to update a string in a table column using a four-step process. By building up the query as a series of SELECT statements, we can minimize the risk of inadvertently changing data that we did not intend to touch.</p></body></html>]]></description>
</item>
<item>
<title>Creating Custom Code Snippets in Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/2093-creating-custom-code-snippets-in-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating Custom Code Snippets in Navicat 16</title></head><body><b>Jan 9, 2023</b> by Robert Gravelle<br/><br/><p>The Code Snippets feature was introduced to all "Non-Essentials" Navicat Database Administration and Development tools in version 12. Version 16 added Code Snippets to Navicat's cloud services so that users could save their Code Snippets to the cloud and share them across Navicat products. For those of you who are unfamiliar with the Code Snippets feature, it allows you to insert reusable code into your SQL statements when working in the SQL Editor. Besides gaining access to a collection of built-in snippets, you can also define your own snippets. We've talked about Code Snippets before.  The March 14, 2018  blog, <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/693-using-navicat-code-snippets" target="_blank">Using Navicat Code Snippets</a>, provided a general overview of the Code Snippets feature. Today's blog will cover how to create your own custom Code Snippets. It's something that can make writing queries a whole lot easier!  </p><h1 class="blog-sub-title">Creating a Code Snippet from Scratch or from Selected Text</h1><p>There are a couple of ways to create a brand new Code Snippet. The first is by clicking the Create Snippet button in the Code Snippet pane: </p><img alt="new_code_snippet_button (49K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/new_code_snippet_button.jpg" height="569" width="279" /><p>If you've got a particular piece of code that you'd like to save for future use, you can select it in the Query Editor, bring up the context menu (i.e. 
right-click in Windows), and select Create Snippet from the menu:</p><img alt="create_snippet_in_context_menu (93K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/create_snippet_in_context_menu.jpg" height="516" width="543" /><p>That will bring up the New Snippet dialog with the Code text field filled in for you:</p><img alt="new_snippet_dialog (36K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/new_snippet_dialog.jpg" height="593" width="386" /><p>From there you can choose a category from the Label drop-down. This will allow you to filter by type later when you need to bring it up. The pre-defined choices are "Comment", "DDL", and "Flow Control". If none of these apply, you can always add your own! The next time you access the New Snippet dialog, it will be included in the Label drop-down.</p><p>There's also a text area for Remarks where you can add some context to the Snippet.</p><p>Finally, don't forget to give your Code Snippet a name! Here's the New Snippet dialog with all of the fields filled in:</p><img alt="new_snippet_dialog_filled_in (45K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/new_snippet_dialog_filled_in.jpg" height="593" width="386" /><p>Once saved, the new Code Snippet will appear in the Code Snippet pane: </p><img alt="new_code_snippet_in_code_snippet_pane (56K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/new_code_snippet_in_code_snippet_pane.jpg" height="618" width="273" /><h1 class="blog-sub-title">Setting Placeholders</h1><p>Placeholders are tabbable text selections that act as arguments for Code Snippets. In fact, auto-completed function arguments are themselves always displayed as placeholders! 
For instance, here is the Count() function:</p><img alt="placeholder_example (4K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/placeholder_example.jpg" height="33" width="315" />  <p>To create a placeholder in your Code Snippet, select the text that you'd like to be highlighted in the Code Snippet and click the <i>Set selected text as placeholder</i> button:</p><img alt="placeholder_button (24K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/placeholder_button.jpg" height="332" width="397" /><p>Placeholder text is highlighted by a colored box for easy identification:</p><img alt="code_snippet with placeholder (20K)" src="https://www.navicat.com/link/Blog/Image/2023/20230109/code_snippet%20with%20placeholder.jpg" height="324" width="386" /><p>You can remove a placeholder the same way using the <i>Remove placeholder</i> button.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to create our own custom Code Snippets in any "Non-Essentials" edition of Navicat 16. It's something that can definitely make writing queries faster and easier!</p></body></html>]]></description>
</item>
<item>
<title>Using SQL Aliases to Simplify Your Queries and Customize the Results</title>
<link>https://www.navicat.com/company/aboutus/blog/2091-using-sql-aliases-to-simplify-your-queries-and-customize-the-results.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using SQL Aliases to Simplify Your Queries and Customize the Results </title></head><body><b>Dec 20, 2022</b> by Robert Gravelle<br/><br/><p>Aliases temporarily rename a table or a column in such a way that does not affect the underlying table(s) or view(s). As a feature of SQL that is supported by most, if not all, relational database management systems, aliases are a great way to simplify your queries and customize the column headers in your result sets. In this blog, we'll do both, using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>.</p><h1 class="blog-sub-title">Renaming Columns</h1><p>A lot of database designers use abbreviations for the table column names to keep them short, for example:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>emp_no for "Employee Number"</li><li>qty for "quantity"</li></ul><p>The full meaning of an abbreviated column name is not always obvious to those who view the query results. To remedy that, you can use column aliases that give columns more descriptive names in the result set.</p><p>The syntax for column aliases is:</p><pre>column_name [AS] alias_name</pre><p>Note that the AS keyword is optional. </p><p>You can include whitespace in your aliases by enclosing them within single (or double) quotes like this:</p><pre>column_name AS 'Alias Name'</pre><p>Here's an example query that includes a few column aliases:</p><img alt="column_aliases (131K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/column_aliases.jpg" height="599" width="513" /><h3>Using Aliases for Expressions</h3><p>You've probably noticed that, if a query contains expressions, the entire expression is utilized as the column header. 
For example:</p><img alt="query_with_expression (110K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/query_with_expression.jpg" height="500" width="583" /><p>Assigning a column alias to the expression makes it much more palatable:</p><img alt="expression_alias (111K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/expression_alias.jpg" height="501" width="539" /><h1 class="blog-sub-title">Table Aliases</h1><p>Table aliases follow the same rules as column aliases, but their purpose is different, as table aliases don't appear anywhere in the query results. In their case, the idea is to use a shorter name in order to associate columns with their table in a way that shortens your queries. </p><p>The basic syntax of a table alias is as follows:</p><pre>SELECT column1, column2, ...
FROM table_name [AS] alias_name
[WHERE condition];</pre><p>A column that is associated with its table is called a qualified column name. Columns need to be qualified when two columns with the same name appear in the same SELECT statement. In fact, we saw an example of qualified column names in the column alias example above. Here's another query that contains two actor_id columns - one from the actor table and another from the film_actor table:</p><img alt="qualified_columns (48K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/qualified_columns.jpg" height="185" width="577" /><p>Although the above query is perfectly functional, we can shorten it by employing table aliases:</p><img alt="table_aliases (44K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/table_aliases.jpg" height="187" width="559" /><p>Notice that unambiguous columns, i.e., those which only appear in one table, do not need to be qualified.</p><p>Another way that table aliases are helpful comes into play when using modern database tools like Navicat. Thanks to the auto-suggest feature, typing a table alias brings up a drop-down of suggestions.  
In the case of table aliases, the drop-down will contain all the table columns:</p><img alt="autocomplete (49K)" src="https://www.navicat.com/link/Blog/Image/2022/20221219/autocomplete.jpg" height="314" width="553" /><p>This greatly accelerates query writing, which is an important part of professional database development.</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog provided an overview of column and table aliases, along with some practical examples in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>. </p><p>If you're interested in learning more about Navicat Premium 16, you can try the <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">full unrestricted version</a> out for 14 days completely free of charge!</p></body></html>]]></description>
</item>
<item>
<title>Navicat 16 and Tablespaces - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2087-navicat-16-and-tablespaces-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16 and Tablespaces - Part 3</title></head><body><b>Dec 13, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Tablespace Management</h1><p>This third and final part of the Navicat 16 and Tablespaces series will focus on how to manage tablespaces in MySQL using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>. Recall that <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2085-navicat-16-and-tablespaces-part-1.html" target="_blank">Part 1</a> presented some advantages offered by tablespaces, including Recoverability, Ease of Adding More Tables, Automatic Storage Management, and the Ability to Isolate Data in Buffer Pools for Improved Performance or Memory Utilization. The second instalment provided more information on what tablespaces are, how they work, and the types of default tablespaces you'll find in the various relational database products.</p><h1 class="blog-sub-title">Creating a Tablespace</h1><p>Just as Navicat provides Table and SQL Designers, there are also facilities for working with Tablespaces. To open the Tablespace Designer, click on Others -> Tablespace from the Main Toolbar:</p><img alt="tablespace_command (12K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/tablespace_command.jpg" height="126" width="268" /><p>In the Designer, click on the New Tablespace button in the Toolbar:</p><img alt="new_tablespace_command (18K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/new_tablespace_command.jpg" height="171" width="384" /><p>The fields shown in the Designer will depend on the type of database that you're working with. In the case of MySQL, you'll see the following fields:</p><ul><li>Engine drop-down: For standard MySQL 5.7 releases, only the InnoDB engine supports tablespaces, so it's the only option in the drop-down. 
MySQL NDB Cluster 7.5 also supports tablespaces using the NDB storage engine.</li><li>Path textbox: Specifies the path of the datafile/tempfile. Note that you have to include the ".ibd" file extension.</li><li>Block Size drop-down: The block size for the tablespace. MySQL only supports a block size of 1024, or 1 MB, so be sure to select that option from the drop-down.</li><li>Block Size Unit: The size of one data block. As mentioned above, MySQL only supports a block size of 1024, or 1 MB; for other database types, you may choose K, M, G, T, P or E to specify the size in kilobytes, megabytes, gigabytes, terabytes, petabytes, or exabytes.</li></ul><p>You can see the generated SQL statement by clicking on the SQL Preview tab:</p><img alt="new_tablespace_sql_preview_tab (18K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/new_tablespace_sql_preview_tab.jpg" height="149" width="339" /><p>Navicat will issue the CREATE TABLESPACE statement upon clicking the Save button. Here are the New Tablespace form fields after a successful Save operation:</p><img alt="new_tablespace_general_tab (31K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/new_tablespace_general_tab.jpg" height="246" width="548" /><p>Before saving the tablespace, Navicat will present a dialog for entering the tablespace name that will be utilized to display the tablespace in the Tablespaces Objects list:</p><img alt="tablespace_name_dialog (12K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/tablespace_name_dialog.jpg" height="153" width="420" /><p>Hence, entering a name of "classicmodels" will add it as seen below:</p><img alt="classicmodels_tablespace_in_objects_list (21K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/classicmodels_tablespace_in_objects_list.jpg" height="124" width="658" /><h1 class="blog-sub-title">Altering a Tablespace</h1><p>Selecting a tablespace from the Tablespaces Objects list will enable the Design Tablespace button in the toolbar 
for editing. If the database in question does not allow tablespace editing, as in the case of MySQL, the form fields will be disabled: </p><img alt="design_tablespace_general_tab (18K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/design_tablespace_general_tab.jpg" height="175" width="525" /><p>Otherwise, data can be modified and re-saved.</p><h1 class="blog-sub-title">Deleting a Tablespace</h1><p>Selecting a tablespace from the Tablespaces Objects list also enables the Delete Tablespace button in the toolbar. Clicking it will bring up a confirmation dialog that requires the user to check a box indicating that the Delete action is permanent and cannot be undone:</p><img alt="delete_tablespace_confirm_dialog (39K)" src="https://www.navicat.com/link/Blog/Image/2022/20221213/delete_tablespace_confirm_dialog.jpg" height="288" width="520" /><p>The user may also click the Cancel button to close the dialog without deleting the tablespace.</p><h1 class="blog-sub-title">Conclusion of Navicat 16 and Tablespaces Series</h1><p>Tablespaces allow database administrators to better control the physical storage layout by putting some tables on faster or more redundant disks, or to stripe tables across disks. This series covered both the theoretical side as well as more practical matters of tablespace management, from their creation to deletion using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 16 and Tablespaces - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2086-navicat-16-and-tablespaces-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16 and Tablespaces - Part 2</title></head><body><b>Dec 6, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">How They Work</h1><p><i>"What is it? It's it" - Epic, Faith No More</i></p><p>Welcome back to this series on working with tablespaces in Navicat 16. Part 1 presented some advantages offered by tablespaces, including Recoverability, Ease of Adding More Tables, Automatic Storage Management, and the Ability to Isolate Data in Buffer Pools for Improved Performance or Memory Utilization. This second instalment will provide more information on what tablespaces are, how they work, and the types of default tablespaces you'll find in the various relational database products. The next and final part of the series will focus on how to manage tablespaces in Navicat 16.</p><h1 class="blog-sub-title">Tablespaces As Containers</h1><p>You can think of tablespaces as containers. These can be a directory name, a device name, or a file name. A single tablespace can have several containers. And, although it is possible for multiple containers (from one or more tablespaces) to be created on the same physical storage device, you will get the best performance if each container you create utilizes a different storage device. The figure below illustrates the relationship between tables and tablespaces within a database:</p><img width="512" alt="DB2 Tablespace RAM and Disk" src="https://www.navicat.com/link/Blog/Image/2022/20221206/DB2_Tablespace_RAM_and_Disk.jpg"><h1 class="blog-sub-title">Tablespaces and the Database Manager </h1><p>The database manager's role is to balance the data load across containers. As a result, all containers are used to store data to a lesser or greater degree. At the same time, the database manager does not always start storing table data in the first container. 
The number of pages that the database manager writes to a container before using a different container is called the "extent size". </p><p>The figure below shows the components of a tablespace, including the extent size:</p><img width="512" alt="Oracle Table in a Tablespace" src="https://www.navicat.com/link/Blog/Image/2022/20221206/512px-Oracle_Table_in_a_Tablespace.jpg"><h1 class="blog-sub-title">Default Tablespaces</h1><p>Most relational databases come with their own built-in tablespaces. Here are a few examples:</p><h3>Oracle</h3><p>Oracle comes with the following default tablespaces: SYSTEM, SYSAUX, USERS, UNDOTBS1, and TEMP:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>The SYSTEM and SYSAUX tablespaces store system-generated objects such as data dictionary tables. You should not store any of your own objects in these tablespaces.</li><li>The USERS tablespace is helpful for ad-hoc users.</li><li>The UNDOTBS1 tablespace holds undo data.</li><li>TEMP is the temporary tablespace, used for storing intermediate results of sorting, hashing, and large object processing operations.</li></ul><h3>MySQL</h3><p>Only the InnoDB engine supports tablespaces, as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>The System Tablespace</li><li>File-Per-Table Tablespaces</li><li>Undo Tablespaces</li></ul><h3>DB2</h3><p>When you create a new database, the database manager creates some default tablespaces for the database. These tablespaces are utilized as storage for user and temporary data. Each database must contain at least three tablespaces, as given here:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Catalog tablespace</li><li>User tablespace</li><li>Temporary tablespace</li></ul><h1 class="blog-sub-title">Going Forward</h1><p>That concludes the second instalment on tablespaces. 
This instalment provided some information on what tablespaces are, how they work and the types of default tablespaces you'll find in the various relational database products. The next and final part of the series will focus on how to manage tablespaces in Navicat 16.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 16 and Tablespaces - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2085-navicat-16-and-tablespaces-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16 and Tablespaces - Part 1</title></head><body><b>Nov 25, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">The Advantages</h1><p>Did you know that Navicat 16 supports tablespaces? A tablespace is a storage structure for tables (as well as indexes, large objects, and long data) that organizes database data into logical storage groupings that relate to where data is stored on the filesystem. Its main function is to link the physical storage layer and the logical storage layer. By assigning tables to a tablespace, you can control the physical storage layout by putting some tables on faster or more redundant disks, or by striping tables across disks. This series is split into two parts: in the first couple of blogs, we'll cover the theoretical side, specifically, what sort of advantages tablespaces offer, as well as how they work. The second part will focus on more practical matters, i.e., how to manage tablespaces in Navicat 16.</p><h1 class="blog-sub-title">Some Advantages Offered by Tablespaces</h1><p>Besides the advantages mentioned above, tablespaces offer a few other benefits:</p><h3>Recoverability</h3><p>Putting objects into the same tablespace makes backing up and restoring the database easier, since you can back up or restore all the objects within a tablespace with a single command. 
Moreover, if you have partitioned tables and indexes that are distributed across tablespaces, you have the option of backing up and/or restoring only the data and index partitions that reside in a given tablespace.</p><h3>Easy to Add More Tables</h3><p>Although there are limits to the number of tables that can be stored in any one tablespace, should you have a need to store more tables than can be accommodated within a single tablespace, you can easily create additional tablespaces for them using the CREATE TABLESPACE command:</p><pre>CREATE TABLESPACE tbs1
    DATAFILE 'tbs1_data.dbf'
    SIZE 1m;</pre><h3>Automatic Storage Management</h3><p>Usually, you need to define and manage the tablespace containers yourself. However, certain databases, such as DB2, support automatic storage tablespaces, whereby storage is managed automatically. Creating a tablespace with the automatic storage tablespace option delegates the creation and management of containers to the database manager. </p><h3>Ability to Isolate Data in Buffer Pools for Improved Performance or Memory Utilization</h3><p>If you have a set of objects (for example, tables and indexes) that are queried frequently, you can assign the tablespace in which they reside to a buffer pool with a single CREATE or ALTER TABLESPACE statement. Temporary tablespaces can also be assigned to their own buffer pool to increase the performance of certain operations such as sorts or joins. For seldom-accessed data, or for applications that require very random access into a very large table, it might make sense to define smaller buffer pools; the data can be kept in the buffer pool for no longer than a single query.</p><h1 class="blog-sub-title">Going Forward</h1><p>This first instalment of the Navicat 16 and Tablespaces series presented several advantages offered by tablespaces. In the next blog, we'll learn more about how tablespaces work.  
Finally, we'll move on to working with tablespaces in Navicat 16.</p></body></html>]]></description>
</item>
<item>
<title>Update Multiple Tables With One Statement</title>
<link>https://www.navicat.com/company/aboutus/blog/2082-update-multiple-tables-with-one-statement.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Update Multiple Tables With One Statement</title></head><body><b>Nov 17, 2022</b> by Robert Gravelle<br/><br/><p>As you well know, multiple server hits can slow down an application. For that reason, developers are keen to find the most efficient ways to update data using as few statements as possible. As it turns out, the SQL UPDATE statement does support the setting of fields from multiple tables using this syntax:</p><pre>UPDATE table1, table2, ...
SET column1 = value1,
    column2 = value2,
    ...
[WHERE conditions]</pre><p>This syntax combines two or more tables much as a join does, with the WHERE clause supplying the join condition.</p><p>Today's blog will present an overview of the multi-table UPDATE statement along with an example using MySQL 8 and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>.</p><h1 class="blog-sub-title">Some Caveats</h1><p>Combining two table updates into one statement is not without limitations and quirks. Here are some points to keep in mind:</p><ul><li> In the multi-table UPDATE query, each record satisfying a condition gets updated. Even if the criteria are matched multiple times, the row is updated only once.</li><li> The syntax of updating multiple tables cannot be used with the ORDER BY and LIMIT keywords.</li></ul><p>So, while the multi-table UPDATE statement is quite efficient, it is not ideal for every situation.</p><h1 class="blog-sub-title">A Practical Example</h1><p>To give the multi-table UPDATE statement a try, we'll create two tables named "library" and "book" and consider the case when one or more books are borrowed from the library. Doing so increases the book count in one table while decreasing it in the other. As it turns out, that's the ideal scenario to combine two separate statements into one UPDATE query. 
This will avoid separate calls to the server, making it a very efficient operation.</p><p>Here are the definitions and contents of each table:</p><h3>The library Table</h3><p><img alt="library_table_definition (32K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/library_table_definition.jpg" height="140" width="623" /></p><p><img alt="library_table_contents (18K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/library_table_contents.jpg" height="116" width="313" /></p><h3>The book Table</h3><p><img alt="book_table_definition (38K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/book_table_definition.jpg" height="159" width="618" /></p><p><img alt="book_table_contents (17K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/book_table_contents.jpg" height="114" width="342" /></p><p>Here is the query that will update both tables:</p><pre>UPDATE library l, book b
SET l.book_count = l.book_count - 2,
    b.book_count = b.book_count + 2
WHERE l.id = b.book_id
AND b.id = '1AG';</pre><p>In the above query, the <i>l.id = b.book_id</i> condition acts as an inner join which combines the two tables and operates on the combined table after checking the table constraints. Meanwhile, the <i>b.id = '1AG'</i> condition further reduces the target rows to those which pertain to user '1AG'.</p><p>Other join types, such as left and right outer joins, may be employed as well; the only requirement is that the two tables being joined share a matching attribute.</p><p>As with the regular (single table) UPDATE statement, the SET keyword is used along with the UPDATE keyword to set the new values in existing rows. It causes the older values to be overwritten with the new data. 
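</p><p>For comparison, here is the same change written as two separate statements (a sketch based on the table definitions above; the combined form does the same work in a single round trip to the server):</p><pre>UPDATE library
SET book_count = book_count - 2
WHERE id = (SELECT book_id FROM book WHERE id = '1AG');

UPDATE book
SET book_count = book_count + 2
WHERE id = '1AG';</pre><p>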
We can observe the query results in Navicat below:</p><img alt="update_result (44K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/update_result.jpg" height="270" width="458" /><p>As expected, the book counts for user '1AG' and book 103 have been updated in both tables:</p><img alt="updated_table_contents (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20221117/EN/updated_table_contents.jpg" height="193" width="345" /><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog presented an overview of the multi-table UPDATE statement along with an example using MySQL 8 and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>. The lesson here is that the multi-table UPDATE statement works best for applying mathematical operations such as incrementing and decrementing on related table columns.</p></body></html>]]></description>
</item>
<item>
<title>Choosing between a Subquery and Join</title>
<link>https://www.navicat.com/company/aboutus/blog/2077-choosing-between-a-subquery-and-join.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Choosing between a Subquery and Join</title></head><body><b>Nov 11, 2022</b> by Robert Gravelle<br/><br/><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1704-joins-versus-subqueries-which-is-faster.html" target="_blank">Joins versus Subqueries: Which Is Faster?</a> blog article we learned that joins tend to execute faster than subqueries. Having said that, it's not a universal rule, so you may not want to automatically assume that a join will be preferable. As mentioned in that article, if you need to add many joins to a query, the database server has to do more work, which can translate to slower data retrieval times. This article will present a couple of quick tests you can perform to compare a query that employs joins to one that contains subqueries so that you can choose which performs best. </p><h1 class="blog-sub-title">Two Queries, Same Result</h1><p>Most of the time, a query can be written using joins or subqueries.  To illustrate, here is a query that selects countries, along with their associated cities and addresses from the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">MySQL Sakila Sample Database</a>. 
The first SELECT statement uses joins while the second one fetches the exact same data using subqueries:</p><pre>SELECT
    co.Country,
    COUNT(DISTINCT ci.city_id) AS city_cnt,
    COUNT(a.city_id)           AS address_cnt
FROM country co
INNER JOIN city ci
    ON co.country_id = ci.country_id
INNER JOIN address a
    ON ci.city_id = a.city_id
GROUP BY
    co.country_id;

SELECT
    Co.Country,
    (SELECT COUNT(1)
       FROM City Ci
      WHERE Ci.country_id = Co.country_id) AS city_cnt,
    (SELECT COUNT(1)
       FROM Address A
      INNER JOIN city C ON A.city_id = C.city_id
      WHERE C.country_id = Co.country_id) AS address_cnt
FROM Country Co;</pre><p>We can easily compare the results in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, because it can run multiple queries simultaneously. Each result set is shown in its own tab below the SQL Editor. In the image below, the contents of the <i>Result 2</i> tab are shown next to <i>Result 1</i> for quick comparison:</p><img alt="country, cities, and addresses (142K)" src="https://www.navicat.com/link/Blog/Image/2022/20221111/country,%20cities,%20and%20addresses.jpg" height="827" width="695" /><h1 class="blog-sub-title">Query Execution Time</h1><p>Having verified that both statements are equivalent, we can now compare their execution times. </p><p>To do that, we can select an individual statement, and click the <i>Run</i> button, whose label changes to <i>Run Selected</i> whenever text is selected in the editor. An <i>Elapsed Time</i> of <i>0.020s</i> can be seen at the bottom of the screen: </p><img alt="join query elapsed time (138K)" src="https://www.navicat.com/link/Blog/Image/2022/20221111/join%20query%20elapsed%20time.jpg" height="875" width="695" /><p>Doing the same with the second statement yields an <i>Elapsed Time</i> of <i>0.021s</i>.  
A minute difference, but one that would grow as the volume of data increases:</p><img alt="subquery elapsed time (123K)" src="https://www.navicat.com/link/Blog/Image/2022/20221111/subquery%20elapsed%20time.jpg" height="848" width="694" /><h1 class="blog-sub-title">Comparing Execution Plans</h1><p>A query's Execution Plan can reveal a lot of information about how quickly it will execute. In Navicat, we can view the Execution Plan by clicking the <i>Explain</i> button.  While it takes some practice to become adept at interpreting the results of Explain, doing so can pay dividends when trying to ascertain a query's efficiency.</p><p>The <i>Explain1</i> tab shows the Execution Plan for the first (join) query.  We can see at a glance that it involves 3 SIMPLE selects:</p><img alt="explain1 (99K)" src="https://www.navicat.com/link/Blog/Image/2022/20221111/explain1.jpg" height="535" width="674" /><p>Meanwhile, the <i>Explain2</i> tab lists one PRIMARY select, followed by three DEPENDENT SUBQUERIES. Even without digging deeper, we can already see that there is an additional step required to execute the second (subquery) statement:</p><img alt="explain2 (42K)" src="https://www.navicat.com/link/Blog/Image/2022/20221111/explain2.jpg" height="135" width="680" /><h1 class="blog-sub-title">Conclusion</h1><p>While this blog seems to confirm the conclusion reached by the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1704-joins-versus-subqueries-which-is-faster.html" target="_blank">Joins versus Subqueries: Which Is Faster?</a> article, it can be a worthwhile exercise to compare both the join and subquery approaches. In any event, there are still times when a subquery is preferable to a join, such as when you have to calculate an aggregate value on-the-fly and use it in the outer query for comparison. </p></body></html>]]></description>
</item>
<item>
<title>Some Disadvantages of Allowing Null Values in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/2076-some-disadvantages-of-allowing-null-values-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Disadvantages of Allowing Null Values in Relational Databases</title></head><body><b>Nov 07, 2022</b> by Robert Gravelle<br/><br/><p>Back in 2020, we learned about <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1312-the-null-value-and-its-purpose-in-relational-database-systems.html" target="_blank">The NULL Value and its Purpose in Relational Database Systems</a>. As stated in that article, the value NULL has become a special marker to mean that no value exists. You could also say that NULL values may indicate that a column could have a value, but you don't know what that value should be yet. In that context, they act as a placeholder until you finally collect the data needed to fill the table field with a real value.</p><p>Moreover, when you consider that all major database vendors support NULLs as default values, it only makes sense to use them, doesn't it?  Well, not so fast.  There are database designers who avoid using NULLs unless absolutely necessary. Do they know something that the rest of us don't? Read on to find out! </p><h1 class="blog-sub-title">Space Considerations</h1><p>Although NULL values represent "nothing" or "no value", they are treated as a value by the database. As such, they take up space on the hard drive. So, if you think that you are saving hard drive space by employing NULL values, you could be mistaken. In fact, NULL is considered to be a variable-length value, meaning that it could be a few bytes or several bytes, depending on the column type. 
The database leaves room for extra bytes should the value be larger than what is stored in the field, the result being that your database might take up more hard drive space than if you had used regular values.</p><h1 class="blog-sub-title">Don't Create a Record with Missing Information</h1><p>Some database administrators argue that if all the columns of a record can't be filled, then a record shouldn't be created. This argument obviously doesn't apply to all use cases, but the idea behind it is that a record should only be created when all fields have actual values without any placeholders. For example, in a banking application, you wouldn't proceed with a transaction if you didn't know the amount of the transaction. Fair enough, but this type of rigorous standard doesn't work so well in other industries such as e-commerce or websites that collect user data.</p><h1 class="blog-sub-title">Complex SQL</h1><p>Another disadvantage affects your database stored procedures. While most databases provide functions to detect NULL values, special care must still be taken to distinguish NULLs from other values. This means that your SQL procedures might be much longer than necessary, and they can become complex to read as well. 
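</p><p>For instance, distinguishing NULLs from empty strings requires explicit predicates for each case. Here is a sketch, assuming a hypothetical table <i>t</i> with a nullable <i>name</i> column:</p><pre>-- Assumed table: t(name VARCHAR(50) NULL)
-- NULL and '' are different: the predicate name = '' evaluates to NULL
-- (not TRUE) when name is NULL, so each case needs its own test.
SELECT
    SUM(name IS NULL)              AS null_count,
    SUM(name = '')                 AS empty_count,
    SUM(name IS NULL OR name = '') AS combo_count
FROM t;</pre><p>In MySQL, SUM over a boolean expression counts the rows for which it is true, so each column tallies one kind of "missing" value.</p><p>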
A database administrator may reject code changes if the procedures are too convoluted and/or unintelligible.</p><p>Case in point, here's a small table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a> that contains a combination of values, empty strings, and NULLs:</p><img alt="edit_menu (46K)" src="https://www.navicat.com/link/Blog/Image/2022/20221107/edit_menu.jpg" height="390" width="326" /><p>In Navicat, it's easy to insert an empty string or NULL via the Edit menu.</p><p>Now here's a query that counts the number of names based on a variety of criteria:</p><img alt="null_query (64K)" src="https://www.navicat.com/link/Blog/Image/2022/20221107/null_query.jpg" height="679" width="383" /> <p>We were looking for a count of 5, as records 4, 5, 7, 8, and 10 do not have values in them. However, only the combo_count returned 5. This is because a NULL value does NOT have a length, so NULLs are not picked up by the length() function. </p><p>From this example, we can conclude that allowing NULL values can make you work extra hard to get at the kind of data you are looking for. Moreover, allowing NULL values may reduce your confidence regarding the data in your database, as you can never quite be sure whether a value exists or not. </p><h1 class="blog-sub-title">Conclusion</h1><p>Most database practitioners choose to allow some NULL values in their database tables, as they are the default value in just about any well-known database and function well as a placeholder for missing data. On the other hand, we saw here that some DBAs don't feel that NULLs are worth the extra trouble they entail. The moral of this story is that you should consider your own business processes before designing your database(s) and choose a structure that best suits your data.</p></body></html>]]></description>
</item>
<item>
<title>How to Backup the Database Structure Only in Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/2075-how-to-backup-the-database-structure-only-in-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How to Backup the Database Structure Only in Navicat 16</title></head><body><b>Oct 28, 2022</b> by Robert Gravelle<br/><br/><p>Although there are few database administrators (DBAs) who do not believe in performing regular database backups, there are many opinions on how best to do so.  Whichever approach you espouse, there are many good reasons to keep a copy of the database schema. In the event of data loss, you can restore the database structure from the schema, and then populate it with the latest data backup.  </p><p>Some database vendors, such as MySQL, offer free utilities (i.e. mysqldump) for backing up the database structure on its own, while others require a specific administration tool to do so. If you're a <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat </a> user, there's no need for external tools.  While data backups may be performed using the Backup Wizard, the schema can be copied using the Data Transfer Tool.  In this blog, we'll learn how!</p><h1 class="blog-sub-title">About the Data Transfer Tool</h1><p>The Navicat Data Transfer Tool is a wizard-driven process that helps you to transfer tables, collections or other objects from one database/schema to another, or to a SQL/script file via a series of screens.  The target database/schema can reside on the same server or on a remote server. In Navicat Premium, you can also transfer objects across server types, e.g. from MySQL to SQL Server. 
Only MongoDB does not support transferring to other server types, due to it being a NoSQL document database, as opposed to a traditional relational database.</p><p>You'll find the command to launch the Data Transfer Tool under <i>Tools > Data Transfer</i> in the Main Menu:</p><img alt="data_transfer_menu_command (41K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/data_transfer_menu_command.jpg" height="271" width="393" /><h1 class="blog-sub-title">Source and Target Screen</h1><p>The first screen is where you provide the Source Connection and Database/Schema, as well as the Target.  The Target may be another connection or an SQL File that you can execute to rebuild the database schema later.</p><p>We'll specify the File option, and choose a location and name for the SQL/script file:</p><img alt="Source and Target Screen (80K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/Source%20and%20Target%20Screen.jpg" height="637" width="886" /><h1 class="blog-sub-title">Options Screen</h1><p>At the bottom of the Source and Target Screen, you'll see a button for choosing various options, including Table, Record, and Other options.</p><p>To back up the database structure only, we simply need to uncheck the <i>Create records</i> option as shown in the image below:</p><img alt="Options Screen (73K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/Options%20Screen.jpg" height="627" width="802" /><h1 class="blog-sub-title">Database Objects Screen</h1><p>On the Database Objects Screen, we can choose which tables, views, procedures/functions, and events to back up.  
If we do not select anything here, an empty database will be backed up, without any objects.</p><img alt="Database Objects Screen (80K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/Database%20Objects%20Screen.jpg" height="637" width="943" /><h1 class="blog-sub-title">Summary Screen</h1><p>The last screen in the process provides a summary of your choices along the way, so that you can verify them before clicking the Start button.  Should you change your mind about anything, you can click the Back button to return to the relevant screen.</p><p>You'll also find a couple of common options there for quick selection:</p><img alt="Summary Screen (58K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/Summary%20Screen.jpg" height="637" width="875" /><h1 class="blog-sub-title">Progress Screen</h1><p>The Progress Screen displays every step of the backup along with a summary of transferred objects, errors, and the elapsed time:</p><img alt="Progress Screen (115K)" src="https://www.navicat.com/link/Blog/Image/2022/20221028/Progress%20Screen.jpg" height="637" width="875" /><h1 class="blog-sub-title">Conclusion</h1><p>Keeping a copy of the database schema is always a good idea: in the event of data loss, you can restore the database structure from the schema and then populate it with the latest data backup.  Although some database vendors, such as MySQL, offer free utilities (i.e. mysqldump) for backing up the database structure on its own, an even easier option is to use <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>'s Data Transfer Tool.  It can transfer tables, collections or other objects from one database/schema to another, or to a SQL/script file, via a series of screens!  </p></body></html>]]></description>
</item>
<item>
<title>Emulating Outer Joins In MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/2073-emulating-outer-joins-in-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Emulating Outer Joins In MySQL</title></head><body><b>Oct 24, 2022</b> by Robert Gravelle<br/><br/><p>Last week's article shed some light on Outer Joins in SELECT queries. It's a JOIN type that returns both matched and unmatched rows from related tables.  Unfortunately, it is not supported by all database (DB) vendors, including MySQL. But that's OK, because Outer Joins can be emulated by combining three other JOIN types, namely LEFT, INNER, and RIGHT joins. In this article, we'll learn more about LEFT and RIGHT joins and how, when combined with an INNER JOIN, they create an OUTER JOIN.</p><h1 class="blog-sub-title">The LEFT Join</h1><p>The LEFT JOIN returns all rows from the left table and the matching rows from the right table. If no matching rows are found in the right table, a NULL is returned. Here's the syntax:</p><pre>SELECT
    select_list
FROM
    T1
LEFT JOIN T2 ON
    join_predicate;</pre><p>The following Venn diagram illustrates what data is fetched from two tables T1 and T2 using the LEFT JOIN clause:</p><img alt="left_join_diagram (31K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/left_join_diagram.jpg" height="263" width="413" /><h1 class="blog-sub-title">The RIGHT Join</h1><p>The RIGHT JOIN returns all rows from the right table and the matching rows from the left table. If no matching rows are found in the left table, a NULL is returned. 
Here's the syntax for that:</p><pre>SELECT
    select_list
FROM
    T1
RIGHT JOIN T2 ON
    join_predicate;</pre><p>The following Venn diagram illustrates what data is fetched from two tables T1 and T2 using the RIGHT JOIN clause:</p><img alt="right_join_diagram (18K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/right_join_diagram.jpg" height="263" width="413" /><h1 class="blog-sub-title">Combining Joins to Emulate an OUTER JOIN</h1><p>It is common knowledge throughout the database community that MySQL lacks support for FULL OUTER JOIN. One common workaround for this shortcoming is to use a UNION ALL to combine three result sets from a LEFT JOIN, an INNER JOIN, and a RIGHT JOIN of two tables, where a <i>join_column IS NULL</i> condition is added to the LEFT and RIGHT joins.</p><p>To demonstrate how to emulate an OUTER JOIN as described above, we'll write a query against the same Project Management database as last week's Understanding SQL Outer Joins article, but in MySQL this time.</p><h3>Finding Unmatched Records in the Left Table</h3><p>This first query will return rows that are found only in the left table. The query below achieves this effect by using a LEFT join with a WHERE clause that specifies that the common (joining) column in the right table is null:</p><img alt="pm_Left_join_query (37K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/pm_Left_join_query.jpg" height="304" width="381" /><h3>Finding Unmatched Records in the Second Table</h3><p>The second query will return rows that are found only in the right table. 
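</p><p>(For reference, the complete emulation pattern looks like the following generic sketch, with placeholder tables <i>t1</i> and <i>t2</i> joined on <i>t1.id = t2.t1_id</i> rather than the Project Management tables:)</p><pre>SELECT * FROM t1
LEFT JOIN t2 ON t1.id = t2.t1_id
WHERE t2.t1_id IS NULL      -- rows only in t1
UNION ALL
SELECT * FROM t1
INNER JOIN t2 ON t1.id = t2.t1_id   -- rows in both tables
UNION ALL
SELECT * FROM t1
RIGHT JOIN t2 ON t1.id = t2.t1_id
WHERE t1.id IS NULL;        -- rows only in t2</pre><p>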
To do that, we'll use a RIGHT join with a WHERE clause that specifies that the common (joining) column in the left table is null:</p><img alt="pm_Right_join_query (40K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/pm_Right_join_query.jpg" height="317" width="383" /><h3>Finding Matched Records in Both Tables</h3><p>To find records that appear in both tables, we can use a standard (INNER) JOIN like the following:</p><img alt="pm_Inner_join_query (45K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/pm_Inner_join_query.jpg" height="345" width="382" /><p>When combined using UNION ALL, the three separate queries produce the same results as an OUTER JOIN: </p><img alt="pm_query (140K)" src="https://www.navicat.com/link/Blog/Image/2022/20221024/pm_query.jpg" height="747" width="555" /><h1 class="blog-sub-title">Conclusion</h1><p>In this article, we learned more about LEFT and RIGHT joins and how, when combined with an INNER JOIN, they create an OUTER JOIN.  Like last week, there is again a caveat.  This technique can be quite inefficient on large tables when used with ORDER BY and/or LIMIT queries, as these utilize a filesort. In such cases, you may want to employ another approach.</p></body></html>]]></description>
</item>
<item>
<title>Understanding SQL Outer Joins</title>
<link>https://www.navicat.com/company/aboutus/blog/2069-understanding-sql-outer-joins.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Understanding SQL Outer Joins</title></head><body><b>Oct 11, 2022</b> by Robert Gravelle<br/><br/><p>The Outer Join is the least understood of all the SQL Join types. Perhaps it's because Outer Joins are required somewhat less often than other join types. In any case, there is nothing inherently strange about Outer Joins. As we'll see in this blog article, a few examples of the Outer Join in action should be enough to clarify any misapprehensions and/or confusion you may have about them.</p><p>This blog will first describe the syntax and purpose of the Outer Join statement, which will then be followed by some illustrative examples. </p><h1 class="blog-sub-title">OUTER JOIN Syntax</h1><p>The OUTER JOIN (or FULL OUTER JOIN if you like) keyword returns all records of two joined tables when there is a match in either the left (table A) or right (table B) table records. The following Venn diagram depicts the potential matches and OUTER JOIN syntax:</p><img alt="outer_join_diagram (12K)" src="https://www.navicat.com/link/Blog/Image/2022/20221010/outer_join_diagram.jpg" height="168" width="254" /><p>Hence, FULL OUTER JOIN returns unmatched rows from both tables, as well as matched rows in both tables. In other words, rows are returned by the query regardless of whether or not the join field value is matched across both tables.</p><p>Still confused? 
Don't worry, we'll go over some examples in the next section to clear things up.</p><h1 class="blog-sub-title">OUTER JOINs In Practice</h1><p>In this tutorial we will use the well-known <a class="default-links" href="http://downloads.alphasoftware.com/a5v12Download/northwindmysql.zip" target="_blank">Northwind sample database</a>.</p><p>The following SQL statement selects all customers, and all orders:</p><pre>SELECT Customers.CustomerName, Orders.OrderID
FROM Customers
FULL OUTER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
ORDER BY Customers.CustomerName;</pre><p>One of the hallmarks of a result set produced by an OUTER JOIN query is that you will see Null values in either table's joined columns, as a row may appear in one table but not the other.  We can observe that here in this screen capture of the above query and results in <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium 16</a>:</p><img alt="outer_join_query1 (74K)" src="https://www.navicat.com/link/Blog/Image/2022/20221010/outer_join_query1.jpg" height="592" width="487" /><p>Of course, you will never see Nulls in both tables' columns because a value must appear in at least one table. It should also be noted that the presence of a Null in the CustomerName column is problematic because it means that an order was placed that is not associated with an existing customer. This would point to a flaw in the database design, most likely a missing Foreign Key Constraint.</p><p>Our second example fetches data from a Project Management database, namely project managers and projects. 
Here's the SQL:</p><pre>SELECT
    m.name member,
    p.title project
FROM
    pm.members m
    FULL OUTER JOIN pm.projects p
        ON p.id = m.project_id;</pre><p>Again, we can see Null values (at least one Null):</p><img alt="outer_join_query2 (30K)" src="https://www.navicat.com/link/Blog/Image/2022/20221010/outer_join_query2.jpg" height="289" width="294" /><p>In this case, the results indicate that Jack Daniel has no projects at the moment.  Whether or not this represents an issue would depend on that organization's particular operations.  It may be perfectly reasonable for a Project Manager to be without a project, or for projects to be unassigned, at any given time.</p><h1 class="blog-sub-title">Conclusion</h1><p>Hopefully today's blog helped shed some light on the purpose and usage of Outer Joins in your queries. One final word of warning: Outer Joins can result in very large result sets, so use them sparingly, and include filtering clauses such as WHERE to minimize the number of rows returned. </p></body></html>]]></description>
</item>
<item>
<title>Storing Enums In a Database</title>
<link>https://www.navicat.com/company/aboutus/blog/2068-storing-enums-in-a-database.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Storing Enums In a Database</title></head><body><b>Oct 5, 2022</b> by Robert Gravelle<br/><br/><p>In the realm of Information Technology, or IT as it's more commonly known, an enum is a special data type that encapsulates a set of predefined constants. As such, the variable may only hold one of the values that have been predefined for it. Common examples include compass directions of NORTH, SOUTH, EAST, and WEST or the days of the week.</p><p>One of the complicating factors when storing enums in a database table is that their values may be numeric or alphabetic (i.e. strings). Moreover, you'll want to prevent users from adding any values to the table that are not part of the permissible enum set. We'll be addressing both of these issues in today's blog. </p><h1 class="blog-sub-title">Enum Values Explored</h1><p>The most basic enums contain a set of zero-based ordinal values each represented by a constant, seen below in Java:</p><pre>public enum Day {
    SUNDAY, MONDAY, TUESDAY, WEDNESDAY,
    THURSDAY, FRIDAY, SATURDAY
}</pre><p>More complex enums may also contain other types; strings are the most common, but more complex objects are also supported.  Here is an enum for representing different environment URLs (also in Java):</p><pre>public enum Environment {
    PROD("https://prod.domain.com:1088/"),
    SIT("https://sit.domain.com:2019/"),
    CIT("https://cit.domain.com:8080/"),
    DEV("https://dev.domain.com:21323/");

    private String url;

    Environment(String envUrl) {
        this.url = envUrl;
    }

    public String getUrl() {
        return url;
    }
}</pre><p>Generally, it is considered bad practice to store enumerations as numerical ordinal values, as it makes debugging and support difficult. It's usually preferable to store the actual enumeration value converted to string. 
To illustrate, imagine that we had an enum of card suits:</p><pre>public enum Suit {
   Spade,
   Heart,
   Diamond,
   Club
}</pre><p>Now imagine that you are a database practitioner trying to decipher either of the following query results:</p><pre>Name          Suit
------------  ----
John Smith    2
Ian Boyd      1

Name          Suit
------------  -------
John Smith    Diamond
Ian Boyd      Heart</pre><p>I think that you will agree that the latter is much easier to interpret, as the first option requires getting at the source code and finding the numerical values that were assigned to each enumeration member.</p><p>Although storing strings takes more disk space, enumeration member names tend to be short, and hard drives are cheap, making the trade-off worthwhile to make your day-to-day job easier.</p><p>Another problem with using numerical values is that they are difficult to update; you cannot easily insert or rearrange the members without explicitly pinning the old numerical values. For example, adding a value of <i>Unknown</i> to the Suit enumeration would require you to update it to:</p><pre>public enum Suit {
   Unknown = 4,
   Heart = 1,
   Club = 3,
   Diamond = 2,
   Spade = 0
}</pre><p>...in order to maintain the legacy numerical values already stored in the database.</p><h1 class="blog-sub-title">Validating Enum Values In the Database</h1><p>Many modern databases, including MySQL and PostgreSQL, support the ENUM data type. Specified as strings, ENUM values are automatically encoded as numbers when stored for compact storage. 
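</p><p>As a minimal sketch in MySQL (the table, column, and values here are illustrative, borrowed from the classic shirts example), declaring, populating, and querying an ENUM column looks like this:</p><pre>CREATE TABLE shirts (
    name VARCHAR(40),
    size ENUM('x-small', 'small', 'medium', 'large', 'x-large')
);

INSERT INTO shirts (name, size)
VALUES ('dress shirt', 'large'),
       ('t-shirt', 'medium'),
       ('polo shirt', 'small');

-- Queries compare against the string form of the ENUM value
SELECT name, size FROM shirts WHERE size = 'medium';</pre><p>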
</p><p>Here are the MySQL statements in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> to create and populate a table with shirts and their sizes, as well as a SELECT query that fetches medium sized shirts:</p><img alt="mysql_enum (57K)" src="https://www.navicat.com/link/Blog/Image/2022/20221005/mysql_enum.jpg" height="322" width="652" /><p>If we now try to insert an invalid ENUM value, we get the following error:</p><img alt="mysql_enum_error (52K)" src="https://www.navicat.com/link/Blog/Image/2022/20221005/mysql_enum_error.jpg" height="280" width="701" /><p>Although the message states that the value was truncated, the data was not actually inserted.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we explored how to work with enum values in the database, including how to store, validate, insert, and retrieve them.</p><p>Interested in trying <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>?  You can use it for 14 days for free!</p></body></html>]]></description>
</item>
<item>
<title>Choosing a Primary Key - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2067-choosing-a-primary-key-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Choosing a Primary Key - Part 3</title></head><body><b>Sep 14, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Strings as Primary Keys</h1><p>In this third and final installment of this series on choosing a Primary Key for relational databases, we'll be examining some of the reasons for employing string data as a Primary Key (PK). Recall that, in <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2058-choosing-a-primary-key-part-1.html" target="_blank">Part 1</a>, we covered Natural and Surrogate Primary Keys and considered why one might choose one over the other. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2059-choosing-a-primary-key-part-2.html" target="_blank">Part 2</a> explored String and Numeric data types as Primary Keys in an effort to ascertain whether one is preferable to the other. Now it's time to set the record straight and conclude whether or not string - or alphabetic - data can make a suitable PK.</p><h1 class="blog-sub-title">Ready-made Keys</h1><p>There is often a sensible natural primary key for your data which has a universal meaning and may not be an integer. If so, then adding an artificial key just for the sake of an integer type adds nothing but redundancy. Performance may be hampered slightly, but in the estimation of many database developers, performance is less important than correctness, integrity, and appropriate modelling. </p><p>A non-obvious benefit of alphabetic keys is that a short symbolic string can simplify debugging by being immediately human-readable in data dumps (without additional joins). For example, US states have an alphabetic code which is unique and is meaningful as a key outside the schema. Also, countries have alphabetic ISO codes. 
And there are an endless number of other examples, such as vehicle identification numbers (VINs), invoice IDs, etc.</p><h1 class="blog-sub-title">Using GUIDs as Primary Keys</h1><p>If you have more than one database, auto-incrementing keys generated independently in each database can collide, resulting in redundant records. One way around this problem is to use GUIDs. Short for "Globally Unique IDentifier", GUID is a 16-byte binary data type that is guaranteed to be unique across tables, databases, and even servers.</p><p>How you create a GUID varies across different databases, but, in SQL Server, the NEWID() function is used as shown below:</p><pre>SELECT NEWID()</pre><p>Here's a statement for creating a table with the UNIQUEIDENTIFIER data type. To set a default value for the column we will use the default keyword and set the default value as the value returned by the NEWID() function:</p><pre>USE EngDB
GO

CREATE TABLE EnglishStudents1
(
    Id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
    StudentName VARCHAR(50)
)
GO

INSERT INTO EnglishStudents1 VALUES (default, 'Shane')
INSERT INTO EnglishStudents1 VALUES (default, 'Jonny')</pre><p>This will ensure that whenever a new record is inserted in the EnglishStudents1 table, by default, the NEWID() function generates a unique value for the Id column. When inserting the records, we simply have to specify "default" as the value for the first column. This will insert a default unique value into the Id column.</p><h3>GUIDs vs. UUIDs</h3><p>While GUIDs (as used by Microsoft) and UUIDs (as defined by RFC4122) look similar and serve similar purposes, there are subtle-but-occasionally-important differences. First, let's establish what UUIDs are.</p><p>UUIDs are 128-bit values; only their textual representation uses hex digits. That bears repeating, because many people think that UUIDs are stored as text. Because UUIDv4 values are random, uniqueness is not strictly guaranteed. However, the probability of a collision is rather small.  
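</p><p>(Incidentally, the MySQL equivalent of the SQL Server pattern shown above, available in MySQL 8.0.13 or later, stores the UUID compactly as BINARY(16); the table and column names in this sketch are assumed for illustration:)</p><pre>CREATE TABLE english_students (
    id BINARY(16) PRIMARY KEY DEFAULT (UUID_TO_BIN(UUID())),
    student_name VARCHAR(50)
);

INSERT INTO english_students (student_name) VALUES ('Shane'), ('Jonny');

-- Convert back to the familiar textual form when reading
SELECT BIN_TO_UUID(id) AS id, student_name FROM english_students;</pre><p>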
Also, keep in mind that in the extremely unlikely case of colliding UUIDs, the collision will be caught by the DB thanks to the primary key constraint. You'll also be happy to know that UUIDv4 values are perfectly well indexed by most relational DBs.</p><p>Some Microsoft GUID docs allow GUIDs to contain any hex digit in any position, while RFC4122 requires certain values for the version and variant fields. Also, GUIDs should be all upper case, whereas UUIDs should be "output as lower case characters and are case insensitive on input". This can lead to incompatibilities between code libraries. </p><p>You could say that GUID is Microsoft's implementation of the UUID standard. Treat them as a 16-byte (128-bit) value that is used as a unique value. In Microsoft-speak they are called GUIDs, but call them UUIDs when not using Microsoft-speak.</p><h1 class="blog-sub-title">The Verdict</h1><p>In this third and final installment of this series on choosing a Primary Key for relational databases, we set out to decisively conclude whether or not string - or alphabetic - data can make a suitable PK. The answer is YES, but with some caveats. You should expect to take a slight performance hit. If that prospect does not deter you, then you've got some options to work with, from short symbolic strings to GUIDs and UUIDs. Finally, try to use fixed-length strings rather than VARCHARs, as fixed-length strings perform better.</p></body></html>]]></description>
</item>
<item>
<title>Choosing a Primary Key - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2059-choosing-a-primary-key-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Choosing a Primary Key - Part 2</title></head><body><b>Aug 23, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">String vs. Numeric Data Types as Primary Keys</h1><p>Welcome back to this series on choosing a Primary Key for relational databases. In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2058-choosing-a-primary-key-part-1.html" target="_blank">Part 1</a>, we covered Natural and Surrogate Primary Keys and considered why one might choose one over the other. Today's instalment will explore String and Numeric data types as Primary Keys in an effort to ascertain whether one is preferable to the other.</p><h1 class="blog-sub-title">String and Numeric Data Types in Relational Databases</h1><p>Both string and numeric nomenclatures are actually umbrella terms that encapsulate several different data types. For starters, the string data type is a generic IT term that traditionally refers to a sequence of characters, either as a literal constant or as some kind of variable. With regards to databases, single characters, represented by the CHAR type, are also grouped with Strings. Other DB string data types include VARCHAR, BINARY, VARBINARY, BLOB, TEXT, ENUM, and SET. Numeric data types include both exact numeric data types such as INTEGER, SMALLINT, DECIMAL, and NUMERIC, as well as the approximate numeric data types like FLOAT, REAL, and DOUBLE PRECISION.</p><h1 class="blog-sub-title">The Great Debate</h1><p>Advice on what data type works best for primary keys (PKs) abounds on the Internet. Some sites state outright that numeric keys are almost always superior to character-based ones, while an equal number of sites promote the use of string types. Meanwhile, DB vendors themselves don't suggest one type over the other. What they do offer are instructions regarding the PRIMARY KEY Constraint. 
It uniquely identifies each record in a table and posits that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Primary keys must contain UNIQUE values, and cannot contain NULL values.</li><li>A table can have only ONE primary key; and in the table, this primary key can consist of single or multiple columns (fields). </li><li>PK values should not be changed over time.</li></ul><p>So long as your PK satisfies the above criteria, then you're good to go, as far as DB vendors are concerned. But that doesn't mean that one type can't offer some advantages over the other. Let's dive into those now.</p><h3>Say Aye for Numeric Types</h3><p>Back when I was first learning about database development, I was instructed that numeric types are best for PKs because they are both faster and more memory efficient. This opinion was reinforced by my first employer, the Federal Government, who utilized numeric PKs, even if that meant adding a surrogate key.</p><p>There are plenty of reputable reference sites that echo that sentiment. Speaking about MySQL, <a class="default-links" href="https://www.mysqltutorial.org/mysql-primary-key/" target="_blank">Mysqltutorial.org</a> states:</p><blockquote>Because MySQL works faster with integers, the data type of the primary key column should be the integer e.g., INT, BIGINT. 
And you should ensure that value ranges of the integer type for the primary key are sufficient for storing all possible rows that the table may have.</blockquote><p>MySQL is far from unique in its handling of numeric data; <a class="default-links" href="https://www.oracletutorial.com/oracle-basics/oracle-primary-key/" target="_blank">another page</a> on Primary Keys in Oracle states that "primary keys typically are numeric because Oracle typically processes numbers faster than any other data types."</p><p>They even go so far as to say that PK data should be "meaningless":</p><blockquote>Sometimes, you may want to use meaningful data, which is considered unique, for the primary keys e.g., social security number (SSN), vehicle identification number (VIN), email, and phone number. However, you don't know when the email or phone number changes or is reused by another person. In such cases, it will create many data problems. In the database world, the artificial keys are known as surrogate keys, as opposed to natural primary keys.</blockquote><h1 class="blog-sub-title">Coming Up Next Week...</h1><p>So far, it would seem that Numeric primary keys are best. However, we have not yet heard from the pro-string side. Perhaps they can offer some very good reasons for using strings instead.</p></body></html>]]></description>
</item>
<item>
<title>Choosing a Primary Key - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2058-choosing-a-primary-key-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Choosing a Primary Key - Part 1</title></head><body><b>Aug 12, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Natural vs. Surrogate Keys</h1><p>One of the first decisions you'll be faced with as a database designer is what kind of Primary Key (PK) to use on your tables. If you ask anyone who works with databases on a daily basis, whether database administrator, developer, or tester, you'll get a myriad of opinions and justifications to go along with them. Making the decision even harder is the fact that there is no one-size-fits-all solution. With that in mind, this series will present some reasons both for and against different types of PKs. Somewhere in all those ideas, there will be a few that will steer you towards the best type of PK to use for your organizational needs. In this first instalment, we'll compare the two basic types of PKs: Natural and Surrogate Keys. Later, we'll cover the questions of whether or not to use the database Auto Increment feature as well as which data type(s) - if any - make the best PKs.</p>   <h1 class="blog-sub-title">Natural Keys</h1><p>A natural key is one made up of one or more columns that already exist in the table (e.g. they are attributes of the entity within the data model) that uniquely identify a record in the table. Since these columns are attributes of the entity, they inherently <strong>possess business meaning</strong>. The following is an example of a table with a natural key in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>'s Table Designer.  
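</p><p>Expressed as DDL, such a table might look like the following sketch (columns abbreviated; the names are modeled loosely on the classicmodels sample schema):</p><pre>CREATE TABLE products (
    productCode VARCHAR(15) PRIMARY KEY,  -- natural key: the existing inventory code
    productName VARCHAR(70)  NOT NULL,
    buyPrice    DECIMAL(10,2) NOT NULL
);</pre><p>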
We can easily identify the Primary Key by the key icon in the Key column:</p><img alt="natural_key (110K)" src="https://www.navicat.com/link/Blog/Image/2022/20220812/natural_key.jpg" height="419" width="711" /><p>Looking at the data, we can see that the <i>productCode</i> has business meaning:</p><img alt="productCode (202K)" src="https://www.navicat.com/link/Blog/Image/2022/20220812/productCode.jpg" height="479" width="774" /><h1 class="blog-sub-title">Surrogate Keys</h1><p>A surrogate key (or synthetic key, pseudokey, entity identifier, factless key, technical key, etc!) is a system generated (GUID, sequence, unique identifier, etc.) value with <strong>no business meaning</strong> that is used to uniquely identify a record in a table.  The key itself could be made up of one or multiple columns (i.e. Composite Key) as well. We can see a surrogate key in a table from the same database, which defines a <i>customerNumber</i> column as its PK:</p><img alt="surrogate_key (133K)" src="https://www.navicat.com/link/Blog/Image/2022/20220812/surrogate_key.jpg" height="503" width="761" /><p>Although not Auto Incrementing, it's a numeric field that is unrelated to the customer entity:</p><img alt="customerNumber (163K)" src="https://www.navicat.com/link/Blog/Image/2022/20220812/customerNumber.jpg" height="479" width="772" /><h1 class="blog-sub-title">Making a Decision</h1><p>So why does one table employ a Natural Key while the other utilizes a Surrogate Key? </p><p>It's quite common for products to have some sort of unique inventory number, which makes an ideal PK. Adding an additional numeric key would simply be a waste of disk space and would almost certainly require an additional index on the <i>productCode</i> column for searching. On the other hand, customers don't typically come with unique identifiers.  Speaking as someone who has had to uniquely identify persons in a database, it takes a surprisingly long list of columns to do so. 
Hence, it's usually much easier to assign a numeric Surrogate Key than to index every identifying column in the table.  </p><h1 class="blog-sub-title">Conclusion to Natural vs. Surrogate Keys</h1><p>In this first instalment on Choosing a Primary Key, we explored Natural and Surrogate Primary Keys and considered why one might choose one over the other. It's important to decide between using a Natural or a Surrogate Key first because which you choose will help answer some of the follow-up questions as well - especially in the case of a surrogate key.</p><p>If you'd like to give Navicat 16 a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Exploring Some Prevalent Stored Procedure Myths</title>
<link>https://www.navicat.com/company/aboutus/blog/2056-exploring-some-prevalent-stored-procedure-myths.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Exploring Some Prevalent Stored Procedure Myths </title></head><body><b>Aug 5, 2022</b> by Robert Gravelle<br/><br/><p>Application developers have long held the belief that housing database operations within stored procedures yielded optimum performance and guarded against SQL Injection attacks. It was also thought that these advantages were worth the extra costs associated with maintenance, testing, and migration of database logic to a different vendor. In recent years, the tide has been turning away from stored procedures - or procs - towards Object-relational Mappers (ORMs) such as Hibernate or Entity Framework as developers have begun to question these long-standing assumptions. </p><p>The <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/2053-are-stored-procedures-an-outdated-tool.html" target="_blank">Are Stored Procedures an Outdated Tool?</a> article highlighted a few reasons for eschewing stored procedures in favor of application code and ORMs. This week, we'll explore the two myths introduced above and see if they still stand up to scrutiny today.</p><h1 class="blog-sub-title">Performance Advantages</h1><p>In the early days of the Internet, it was common practice to minimize network traffic in order to boost performance. Stored procedures helped reduce network traffic by requiring only the proc name and parameters to be transferred over to the server rather than the full SQL statement. Considering the complexity and length of some production queries, these gains could sometimes be substantial. Today, whatever gains you may garner from this approach are easily offset by the fact that you are all too likely to end up calling the same procedure two or three times with the same parameters in the same request. Meanwhile, an ORM would look in its Identity Map and recognize that it already retrieved that result set, so there's no need to do another round trip. 
Moreover, it should be noted that the claim that stored procedures are cached on the server, whereas ad-hoc SQL is not, is a myth that was busted by Frans Bouma in his blog post, <a class="default-links" href="https://weblogs.asp.net/fbouma/38178" target="_blank">Stored Procedures are bad, m'kay?</a>.</p><h1 class="blog-sub-title">Stored Procedures and SQL Injection</h1><p>It has often been said that stored procedures offer natural protection against SQL injection because they separate data from instructions. This is true, as long as the developer doesn't use dynamic SQL within the stored procedure, where a raw string passed via the input parameter replaces the placeholder. Here's a badly written proc that shows exactly how it could open up the database to SQL injection:</p><pre>create procedure GetStudents(@School nvarchar(50))
as
begin
    declare @sql nvarchar(100)
    set @sql = 'SELECT STUDENT FROM SCHOOL WHERE SCHOOL LIKE ' + @School
    exec(@sql)
end</pre><p>You can write SQL that eliminates SQL injection vulnerabilities by using parameterized queries. Written in a programming language such as Python, TypeScript, or Java, a prepared statement like the one below (shown here in Java) can sanitize user input so that it's safe to use in your queries:</p><pre>String sql = "SELECT STUDENT FROM SCHOOL WHERE SCHOOL LIKE ?";
PreparedStatement prepStmt = conn.prepareStatement(sql);
prepStmt.setString(1, "Waterloo%");
ResultSet rs = prepStmt.executeQuery();</pre><p>The lesson here is that protection against SQL injection is not a benefit of stored procedures themselves, but rather of the convention of not concatenating SQL strings together.</p><h1 class="blog-sub-title">Going Forward</h1><p>This blog explored a couple of long-held assumptions about stored procedures that don't quite hold true today. 
While not by themselves sufficient reason to hop off the stored procedure bandwagon, they do strongly suggest that it may be time to reevaluate your application architecture.</p></body></html>]]></description>
</item>
<item>
<title>What's New in Navicat 16.1 - Part 4</title>
<link>https://www.navicat.com/company/aboutus/blog/2055-what-s-new-in-navicat-16-1-part-4.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What's New in Navicat 16.1 - Part 4</title></head><body><b>Jul 21, 2022</b> by Navicat<br/><br/><h1 class="blog-sub-title" style="font-size: 24px;">Data Synchronization</h1><p style="margin: 15px 0;">You are now able to choose not to preview the results and deploy directly. Navicat now offers two buttons:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>Compare & Preview: preview the comparison results.</li><li>Compare & Deploy: skip the preview and deploy immediately.</li></ul><img src="/link/Blog/Image/2022/20220723/Data_Sync.png" style="max-width: 100%;" /><h1 class="blog-sub-title" style="font-size: 24px;">Dump SQL File</h1><p style="margin: 15px 0;">We've enhanced our Dump SQL File feature. The file created from Dump SQL File can now be opened in three ways:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>Open: open the file using Query Editor.</li><li>Open with External Editor: open the file with a different editor.</li><li>Open Containing Folder: open the folder where the file is located.</li></ul><img src="/link/Blog/Image/2022/20220723/SQL_Dump.png" style="max-width: 100%;" /><!--<h1 class="blog-sub-title" style="font-size: 24px;">Miscellaneous</h1><p style="margin: 15px 0;">We think you will appreciate the improvement for asking to save your query before closing the window. Previously, this option was disabled by default. However, we now activate this option as default setting.</p><img src="/link/Blog/Image/2022/20220723/Ask_Close.png" style="max-width: 100%;" />--></body></html>]]></description>
</item>
<item>
<title>Are Stored Procedures an Outdated Tool?</title>
<link>https://www.navicat.com/company/aboutus/blog/2053-are-stored-procedures-an-outdated-tool.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Are Stored Procedures an Outdated Tool?</title></head><body><b>Jul 27, 2022</b> by Robert Gravelle<br/><br/><p>Stored procedures have been falling out of favour with some organizations for several years now. The preferred approach of these businesses for accessing their database(s) is to employ an Object-relational Mapper (ORM) such as NHibernate or Entity Framework. Over the next couple of blog articles, we'll explore their reasons for doing so, and whether this paradigm shift points to the eventual obsolescence of Stored Procedures.</p><h1 class="blog-sub-title">Stored Procedure Basics</h1><p>As expressed in the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1012-understanding-stored-procedures-and-functions-in-relational-databases.html" target="_blank">Understanding Stored Procedures and Functions in Relational Databases</a> article:</p><blockquote>A stored procedure - or "proc" for short - is a set of Structured Query Language (SQL) statements with an assigned name, which are stored in a relational database management system as a group, so it can be reused and shared by multiple programs. Stored procedures can access or modify data in a database, but it is not tied to a specific database or object. This loose coupling is advantageous because it's easy to reappropriate a proc for a different but similar purpose.</blockquote><p>Sounds like a useful tool so far, but, as we'll see in the next section, not everyone is convinced.</p><h1 class="blog-sub-title">Drawbacks of Stored Procedures</h1><p>Despite their long-established advantages, opponents of Stored Procedures point out their many disadvantages, such as:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Bug Prone: Since stored procedures encapsulate application logic, opponents argue that this logic should be moved into the application code, where it can be better managed and tested. 
Due to the inherent challenges in testing stored procs, they can be the cause of some really nasty bugs.</li><li>Implementation Differences: Stored procedure implementations vary from vendor to vendor. While many DB developers consider Oracle's stored procedures to be of the highest quality, other products' procedures, such as those of MySQL, are less well conceived.</li><li>Changing Requirements: One of the original use cases for stored procedures was to reduce network traffic. However, with today's lightning-fast network speeds, this isn't nearly as big an issue as it once was. As such, dropping application logic into stored procedures can be a case of premature optimization. </li><li>Difficult to Maintain: Stored procedures tend to require much more work to develop and maintain than application code. For starters, you need individual stored procedures to execute create, retrieve, update, and delete operations for each table, plus a separate stored procedure for each different query that you wish to make. Then, you need to implement classes and/or methods in your code to call each stored procedure. Compare that with an O/R mapper, where all that's needed are class definitions, a database table, and a mapping file. In fact, modern ORMs use a convention-based approach that eliminates the need for a separate mapping definition.</li><li>Code Duplication: Stored procedures require you to violate the DRY (Don't Repeat Yourself) principle, since you have to reference database table columns half a dozen times or more. Moreover, it isn't possible to pass an object as a parameter to most stored procedures - only simple types like string, integer, date/time, etc. - making it virtually impossible to avoid huge parameter lists (a dozen or more is common!).</li></ul><p>Even the most staunch opponents of stored procedures still use them in some circumstances. For example, stored procs are great for database housekeeping or reporting. 
Otherwise, developers should have very good reasons to integrate them into their applications.</p><h1 class="blog-sub-title">Going Forward</h1><p>Having heard a few reasons for eschewing stored procedures in favor of application code and Object-relational Mappers (ORMs) such as NHibernate or Entity Framework, you may be convinced that this is the way to go. Well, don't make the switch just yet; in the next installment, we'll consider a few more motivations for both abandoning and staying with stored procs. Then, if you still want to make the change, at least you'll be armed with a more complete understanding of all the issues involved.</p></body></html>]]></description>
</item>
<item>
<title>What's New in Navicat 16.1 - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2054-what-s-new-in-navicat-16-1-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What's New in Navicat 16.1 - Part 3</title></head><body><b>Jul 21, 2022</b> by Navicat<br/><br/><h1 class="blog-sub-title" style="font-size: 24px;">Charts</h1><h1 class="blog-sub-title">Trend Line supported</h1><p style="margin: 15px 0;">Starting with Navicat 16.1, a trend line can be added to Vertical/Horizontal Bar Chart, Line Chart, Area Chart, Bar and Line Chart, and Scatter Chart. To make your chart even more understandable and easily interpreted, you can change the default appearance of a trend line. To do so, simply click the Trend Line tab in the Properties pane.</p><img src="/link/Blog/Image/2022/20220722/Trend_Line.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Better properties pane</h1><p>The Properties pane allows you to customize your charts. Before, all properties were displayed under one panel only. Now, properties are categorized into several tabs - General, Data, Axis and Trend Line. Each tab groups related properties and settings, ensuring that you can easily find the right setting when you need it.</p><img src="/link/Blog/Image/2022/20220722/Charts_Properties.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Support for additional month format</h1><p>We've added the ability to abbreviate the name of the month to one letter. For example, January appears as J.</p><img src="/link/Blog/Image/2022/20220722/Charts_Month.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Changing chart color</h1><p>Previously, it was time-consuming to look for the color settings in the Properties pane. It's now possible to change the chart color via the right-click context menu.</p><img src="/link/Blog/Image/2022/20220722/Charts_Color.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Other improvements</h1><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>We've enhanced our filter function, which is now divided into two filters. 
The Data Filter filters the source data in the current chart; the Display Filter applies to the displayed data.</li><li>Added smoothed lines in Line Chart.</li><li>Added "Restore Defaults (Style Only)" on each Properties tab for restoring the chart style only.</li><li>Added a hint when resizing/moving objects in a dashboard.</li></ul></body></html>]]></description>
</item>
<item>
<title>What's New in Navicat 16.1 - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2050-what-s-new-in-navicat-16-1-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What's New in Navicat 16.1 - Part 2</title></head><body><b>Jul 21, 2022</b> by Navicat<br/><br/><h1 class="blog-sub-title" style="font-size: 24px;">Data Viewer</h1><h1 class="blog-sub-title">Better view for your data</h1><p style="margin: 15px 0;">We think you will particularly appreciate the improved highlighting of the selected row and active cell. It's now easier to see your current place in Grid View.</p><img src="/link/Blog/Image/2022/202207212/View_Highlight.png" style="max-width: 100%;" /><p style="margin: 15px 0;">We've improved the data alignment in Grid View. Previously, poor alignment made it difficult to read data across number and text columns, but now Navicat handles this situation well.</p><img src="/link/Blog/Image/2022/202207212/Column_Space.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Show and hide Columns</h1><p>We've made it easier to show/hide columns by adding a Columns button to the toolbar for quick access.</p><img src="/link/Blog/Image/2022/202207212/Column.png" style="max-width: 100%;" /><h1 class="blog-sub-title" style="font-size: 24px; margin-top: 100px;">Table Designer</h1><p>Here's what we've improved for Table Designer:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>Added column CHECK support for MariaDB 10.2.1 or above.</li><li>Added support for PostgreSQL partitioned hash.</li></ul><h1 class="blog-sub-title" style="font-size: 24px; margin-top: 100px;">Query</h1><h1 class="blog-sub-title">Completion in MongoDB</h1><p>Logical operators are now available in code completion for MongoDB.</p><img src="/link/Blog/Image/2022/202207212/Completion_MongoDB.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Copy code snippets to cloud</h1><p>When writing SQL, you might sometimes insert reusable code into the editor. 
If you want to share your own snippets in the cloud, use the new Copy Snippet To action under the Code Snippet pane.</p><img src="/link/Blog/Image/2022/202207212/Snippets.png" style="max-width: 100%;" /></body></html>]]></description>
</item>
<item>
<title>What's New in Navicat 16.1 - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2049-what-s-new-in-navicat-16-1-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What's New in Navicat 16.1 - Part 1</title></head><body><b>Jul 21, 2022</b> by Navicat<br/><br/><h1 class="blog-sub-title" style="font-size: 24px;">General</h1><h1 class="blog-sub-title">Connection tree</h1><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>Pressing Ctrl+F in the connection tree now opens the Search box in the tree; previously, it activated a different Search box. </li><li>Added a shortcut in the submenu for quick access to creating a new connection profile.</li></ul><img src="/link/Blog/Image/2022/20220721/New_Connection_Profile.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Information Pane</h1><p>The behavior of the Information Pane is now more straightforward:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>Highlight and copy the object title directly.</li><li>Added a Copy button, making it easier to copy a value to the clipboard.</li><li>Showing only the related information that you require. Before, SSH/HTTP settings were still displayed even if they weren't set. </li><li>Finding in the DDL tab - if you are looking for particular information in the DDL, you can use text search for help!</li></ul><div class="row"><div class="col-xs-5"><img src="/link/Blog/Image/2022/20220721/Information_Pane_1.png" style="max-width: 100%;" /></div><div class="col-xs-7"><img src="/link/Blog/Image/2022/20220721/Information_Pane_2.png" style="max-width: 100%;" /></div></div><h1 class="blog-sub-title">Open current tab in new window</h1><p>You can now right-click on a tab and open it in a new window.</p><img src="/link/Blog/Image/2022/20220721/Move_Tab.png" style="max-width: 100%;" /><h1 class="blog-sub-title" style="font-size: 24px; margin-top: 100px;">Connection</h1><h1 class="blog-sub-title">Support OceanBase Community Edition</h1><p>We're pleased to announce that OceanBase Community Edition has been added to the Navicat family. 
Together with our efficient technical support, Navicat is exactly what you need.</p><img src="/link/Blog/Image/2022/20220721/OceanBase.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Upgrade for MongoDB Driver</h1><p>The following points have been improved for MongoDB connections:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px; margin-bottom: 15px;"><li>MongoDB driver upgraded from 1.16.2 to 1.21.1.</li><li>Added support for MongoDB Atlas Serverless.</li><li>You can now choose either driver 1.21.1 (Default) or 1.16.2 (Legacy).</li></ul><img src="/link/Blog/Image/2022/20220721/MongoDB.png" style="max-width: 100%;" /><h1 class="blog-sub-title">Show Hidden Passwords Behind Dots</h1><p>We hide the password field with dots for security purposes. You can now click on the eye symbol to reveal the masked password.</p><img src="/link/Blog/Image/2022/20220721/Password.png" style="max-width: 100%;" /></body></html>]]></description>
</item>
<item>
<title>Find Customers Who Have Purchased More Than n Items Within a Given Timeframe</title>
<link>https://www.navicat.com/company/aboutus/blog/2048-find-customers-who-have-purchased-more-than-n-items-within-a-given-timeframe.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Find Customers Who Have Purchased More Than <i>n</i> Items Within a Given Timeframe</title></head><body><b>Jul 18, 2022</b> by Robert Gravelle<br/><br/><p>Part of knowing your business is tracking sales metrics such as units sold and identifying your best customers. To that end, you'll probably want to begin with fetching data about customers who've made the most purchases throughout the month, quarter, year, or other time period. This data will allow you to analyze their buying patterns and identify trends. This blog will present a few sample queries to do that by combining the mighty Count() function with the GROUP BY and HAVING clauses.</p><h1 class="blog-sub-title">A Basic Query</h1><p>We'll be executing our queries against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>. It's a nicely normalized schema modeling a DVD rental store, featuring things like films, actors, film-actor relationships, and a central inventory table that connects films, stores, and rentals. Hence, its customers are not buying movies, but rather, renting them. Nevertheless, the criteria for selecting the data remain the same, which is to count the rows of the main <i>rental</i> table and group results by <i>customer_id</i>. Here is a basic query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a> that limits results to those customers who rented more than 20 movies in total:</p><img alt="basic_query (61K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/basic_query.jpg" height="794" width="382" /><p>That orders results by <i>customer_id</i>.  
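</p><p>In text form, a query along those lines looks like the following sketch against the Sakila <i>rental</i> table (the alias name is our own):</p><pre>SELECT customer_id,
       COUNT(*) AS num_of_films_rented
FROM   rental
GROUP BY customer_id
HAVING COUNT(*) > 20
ORDER BY customer_id;</pre><p>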
Later on, we'll sort results by <i>num_of_films_rented</i>.</p><h1 class="blog-sub-title">Fetching Additional Customer Details</h1><p>While the above query is sufficient to identify those customers who rented many movies, it does not provide any customer details other than their IDs. To include more customer data, we need to join the customer table, so that each row of the grouped results picks up the corresponding customer's details. Here are the results with customer names added:</p><img alt="customer_data (125K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/customer_data.jpg" height="792" width="441" /><h1 class="blog-sub-title">Filtering Results</h1><p>So far, we've been casting a very wide net, including results for all films and time periods. We could get more specific by targeting films by category as well as time period. To do that, we'll need to add a few more tables. If you're ever unsure how to JOIN tables to a query, in Navicat, you can select the tables in the Object pane and run the <i>Reverse Tables to Model...</i> command:</p><img alt="reverse_tables_to_model (68K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/reverse_tables_to_model.jpg" height="783" width="326" /><p>That will add them to a schema diagram in the Modeling Tool so that you can view their relationships:</p><img alt="schema_diagram (264K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/schema_diagram.jpg" height="867" width="1202" /><p>In the revised query, we'll limit results to comedies that were rented throughout 2005:</p><img alt="rentals_by_category (107K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/rentals_by_category.jpg" height="550" width="558" /><p>Notice that the minimum film count was lowered to 5 because there are fewer rentals for a single category.</p><h1 class="blog-sub-title">Sorting By Count</h1><p>Perhaps you'd rather view records by rental counts. 
All that's required to make that happen is to include an ORDER BY clause.  Here is the final query, sorted by <i>num_of_films_rented</i> in DESCending order, so that the customer who rented the most comedies in 2005 appears at the top of the results:</p><img alt="ordered_by_count (111K)" src="https://www.navicat.com/link/Blog/Image/2022/20220718/ordered_by_count.jpg" height="585" width="562" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to combine the Count() function with the GROUP BY and HAVING clauses to gain valuable insight into our customers' spending habits. As you can imagine, the same query structure can be utilized to discover all sorts of trends and patterns related to product sales and/or rentals. Insights gleaned can be of tremendous benefit in guiding organizational decisions.</p></body></html>]]></description>
</item>
<item>
<title>Selecting Odd or Even Numbered Rows From a Table</title>
<link>https://www.navicat.com/company/aboutus/blog/2043-selecting-odd-or-even-numbered-rows-from-a-table.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Selecting Odd or Even Numbered Rows From a Table</title></head><body><b>Jul 8, 2022</b> by Robert Gravelle<br/><br/><p>Having to select only odd or even rows from a table sounds like something that you'd never have to do, that is until you do. A quick Google search confirms that it's something that is done often enough, but, with few database practitioners knowing how, they invariably turn to online database communities in search of answers. As a reader of this blog, you can save yourself the trouble of scouring database forums for a solution, as we'll set the record straight right here today.</p><h1 class="blog-sub-title">Picking a Suitable Target Column </h1><p>Before we can speak of "even or odd rows" we have to order the rows by the column whose data we're interested in splitting. Ideally, its data should be numeric, unique, and sorted in ascending order.  Hence, auto-increment columns like those of a primary key make perfect candidates.  Otherwise, you may need to write a subquery with an ORDER BY clause and then select from it.</p><p>As an example, let's open the orders table of the classicmodels sample database in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>'s Table Designer. 
We can see that its PK (the orderNumber column) is not auto-incrementing, as evidenced by the unchecked "Auto Increment" checkbox:</p><img alt="orders_table_design (17K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/orders_table_design.png" height="398" width="697" /><p>However, opening the table in Grid View shows that orderNumber values are clearly sorted in ascending order:</p><img alt="orders_table (212K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/orders_table.jpg" height="648" width="690" /><p>Hence, we can write a query directly against the table.</p><h1 class="blog-sub-title">Solutions By Database</h1><p>The simplest way to find the records with odd or even values is to check the remainder when we divide the column value by 2. A remainder of 0 indicates an even number, while a remainder of 1 indicates an odd number. However, like so many database tasks, how you go about determining the remainder depends on what type of database you're working with. </p><p>In PostgreSQL, MySQL, and Oracle, we can use the MOD() function to check the remainder:</p><p>Here's the general query syntax to find rows where a specified column has even values:</p><pre>SELECT * FROM table_name WHERE mod(column_name,2) = 0;</pre><p>This syntax will find rows where our target column has odd values:</p><pre>SELECT * FROM table_name WHERE mod(column_name,2) <> 0;</pre><p>SQL Server does not have a MOD function.
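<p>Before moving on to SQL Server, here's a quick, runnable sanity check of the remainder approach, sketched in Python with the built-in sqlite3 module. (Note that core SQLite also lacks a MOD() function and uses the % operator instead; the table and values are illustrative.)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (orderNumber INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO orders VALUES (?)", [(n,) for n in range(10100, 10106)])

# Even-valued rows: remainder 0 when divided by 2.
even = cur.execute(
    "SELECT orderNumber FROM orders WHERE orderNumber % 2 = 0 ORDER BY orderNumber"
).fetchall()
# Odd-valued rows: non-zero remainder.
odd = cur.execute(
    "SELECT orderNumber FROM orders WHERE orderNumber % 2 <> 0 ORDER BY orderNumber"
).fetchall()
print(even)  # [(10100,), (10102,), (10104,)]
print(odd)   # [(10101,), (10103,), (10105,)]
```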
Instead, it provides the % modulus operator.</p><p>Here's the general query syntax to find rows where a specified column has even values:</p><pre>SELECT * FROM table_name WHERE column_name % 2 = 0;</pre><p>This syntax will find rows where our target column has odd values:</p><pre>SELECT * FROM table_name WHERE column_name % 2 <> 0;</pre><h1 class="blog-sub-title">Some Examples</h1><p>Let's give each of the above statements a try against the orders table of the classicmodels sample database, first in MySQL, then in SQL Server.</p><p>First, we'll retrieve even rows:</p><img alt="even_rows_in_mysql (170K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/even_rows_in_mysql.jpg" height="630" width="696" /><p>Next, we'll fetch odd rows only:</p><img alt="odd_rows_in_mysql (204K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/odd_rows_in_mysql.jpg" height="737" width="697" /><p>As mentioned previously, SQL Server does not have a MOD function, so we'll employ the % modulus operator instead.</p><p>Even rows:</p><img alt="even_rows_in_sql_server (215K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/even_rows_in_sql_server.jpg" height="777" width="690" /><p>Odd rows:</p><img alt="odd_rows_in_sql_server (211K)" src="https://www.navicat.com/link/Blog/Image/2022/20220707/odd_rows_in_sql_server.jpg" height="775" width="695" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog presented an easy way to retrieve odd or even numbered rows from various databases by checking the remainder after dividing the target column value by 2 - a solution that is both simple and effective.</p></body></html>]]></description>
</item>
<item>
<title>A Database Tools Showdown: HeidiSQL versus Navicat - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/2042-a-database-tools-showdown-heidisql-versus-navicat-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Database Tools Showdown: HeidiSQL versus Navicat</title></head><body><b>Jun 24, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Supported Platforms and Databases, Plus SQL Editing</h1><p>In this three-part series, we've been comparing HeidiSQL, a free database client, with Navicat Premium. So far we've done a quick visual comparison and looked at both tools' pros and cons. In this final instalment, we'll be examining specific features, such as supported platforms and databases, SQL Editing, and more!</p><h1 class="blog-sub-title">Supported Platforms</h1><p>HeidiSQL was built for the Windows platform, and still only works on Windows. On the <a class="default-links" href="https://www.heidisql.com/download.php" target="_blank">download page</a>, there is a 32/64 bit combined (SHA1 checksum) Installer, Portable (zipped) versions for 32 bit and 64 bit, as well as the full source code. You can also run HeidiSQL on Wine. Short for "Wine Is Not an Emulator", Wine is a compatibility layer for running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, and BSD. HeidiSQL runs fine on Windows 8 and 10, but has minor issues on both Windows 7 and 11. Moreover, running HeidiSQL on any Wine release newer than 4.0 is currently quite unstable.</p><p>Navicat Premium is available for Windows (32 and 64 bit), macOS (64 bit), and Linux (64 bit). As such, each version of Navicat is optimized for its host O/S. As a commercial product, Navicat comes with customer support. Users can also <a class="default-links" href="https://help.navicat.com/hc/en-us/requests/new" target="_blank">submit a ticket</a> in the rare instance that they encounter a bug, to receive assistance in resolving it and have it fixed in the next patch or minor release. </p><h1 class="blog-sub-title">Supported Databases</h1><p>Initially, HeidiSQL offered support for MySQL and MariaDB, then added MS SQL Server.
Now it includes PostgreSQL support as well.</p><p>Meanwhile, Navicat Premium is a Universal Database Tool, which means that it supports all popular databases, including MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite. It is also compatible with cloud databases, such as Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas. </p><h1 class="blog-sub-title">SQL Editing</h1><p>Both Navicat and HeidiSQL's Query Editors are similar in terms of functionality. Both offer auto-completion and customizable Code Snippets that strip the repetition from coding.  Here's a side-by-side comparison with HeidiSQL on the left and Navicat Premium on the right:  </p><img alt="heidisql_vs_navicat_auto_complete (91K)" src="https://www.navicat.com/link/Blog/Image/2022/20220624/heidisql_vs_navicat_auto_complete.jpg" height="297" width="1201" /><p>Each tool includes common SQL statements, functions, and code snippets in the right-hand pane. Here they are with HeidiSQL again on the left and Navicat Premium on the right: </p><img alt="heidisql_vs_navicat_query_extras (78K)" src="https://www.navicat.com/link/Blog/Image/2022/20220624/heidisql_vs_navicat_query_extras.jpg" height="564" width="760" /><p>One Navicat tool that is absent in HeidiSQL is the Visual Query Builder. It allows anyone to create and edit queries with only a cursory knowledge of SQL: </p><img alt="Navicat Visual Query Builder" src="https://www.navicat.com/link/Blog/Image/2018/20180103/query%20builder%20with%20tables.jpg" /><p>A Visual Query Builder is a must-have for many users, as is evidenced by a <a class="default-links" href="https://www.heidisql.com/forum.php?t=12029" target="_blank">forum thread</a> about that very subject on the HeidiSQL site. </p><h1 class="blog-sub-title">Conclusion</h1><p>In part 3 of this series on HeidiSQL vs.
Navicat Premium, we examined specific features of both products, such as supported platforms and databases, as well as SQL Editing. While both share many similarities, there is no question that Navicat offers a more comprehensive array of tools and features, from the Visual Query Builder, to Navicat Cloud, to a dedicated support team. As we saw when we previously compared DBeaver to Navicat, Navicat Premium perfectly exemplifies the age-old adage that you get what you pay for.</p></body></html>]]></description>
</item>
<item>
<title>A Database Tools Showdown: HeidiSQL versus Navicat - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/2041-a-database-tools-showdown-heidisql-versus-navicat-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Database Tools Showdown: HeidiSQL versus Navicat</title></head><body><b>Jun 22, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">General Comparison</h1><p>While HeidiSQL and Navicat Premium share many similarities, they are in fact different product types. Navicat Premium is what's known as a Universal Database Tool, which means that it supports ALL popular databases, including MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite. Moreover, Navicat is compatible with cloud databases as well, such as Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas. HeidiSQL began as a MySQL/MariaDB client and evolved to support a few additional database types. That being said, the two products are similar enough to warrant a side-by-side comparison.  In this installment we'll be taking a high level inventory of pros and cons, while the next part will focus on specific features.</p><h1 class="blog-sub-title">Visual Interface</h1><p>A quick look at an application's visual interface, or UI/UX, as it's more commonly known, can instantly give us some idea about how easy or difficult the application might be to work with. With that in mind, here's a screen capture of HeidiSQL's Data view:</p><img alt="heidisql_gui (313K)" src="https://www.navicat.com/link/Blog/Image/2022/20220622/heidisql_gui.jpg" height="766" width="979" /><p>There is no question that the HeidiSQL UI is chock full of information.
Perhaps a little too much, as some elements run out of room at smaller viewport sizes:</p><img alt="text_wrapping (13K)" src="https://www.navicat.com/link/Blog/Image/2022/20220622/text_wrapping.jpg" height="78" width="230" /><p>Aside from minor glitches, the overall design is excellent; there is a main toolbar for accessing common functionality, and the bottom pane shows all database commands in real time.</p><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a> saw a lot of changes to the GUI. In fact, it was completely revamped with the goal of improving usability and accessibility, allowing the user to accomplish complex tasks faster than ever before:</p><img alt="navicat_gui (242K)" src="https://www.navicat.com/link/Blog/Image/2022/20220622/navicat_gui.jpg" height="672" width="962" /><p>Navicat shows the latest database command at the bottom of the screen, and also includes additional table, column, and DDL information in the right pane.</p><h1 class="blog-sub-title">Some Quick Pros and Cons</h1><p>Now, let's run through some pros and cons of each product. First, HeidiSQL:</p><p><strong>Pros:</strong></p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>It's lightweight.</li><li>Connects to multiple servers in one window.</li><li>Free to use; licensed under the GNU GPL. The source code is also available.</li><li>Available in a portable version.</li><li>Full database user roles and privileges management.</li><li>Write queries with customizable syntax-highlighting and code-completion.</li><li>Data synchronization. HeidiSQL can compare and synchronize your data and structure between local and remote databases.</li><li>SSH tunneling support</li></ul><p><strong>Cons:</strong></p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Low stability.
HeidiSQL is known to have a lot of bugs that result in frequent crashes.</li><li>It's only available for Windows, and it doesn't look like a cross-platform version is coming anytime soon.</li><li>No built-in debugger included</li><li>No high-DPI support. (DPI stands for dots per inch and determines the clarity and crispness of a display.) The author attempted to add high-DPI support but wound up dropping it.</li></ul><p>And now, Navicat Premium:</p><p><strong>Pros:</strong></p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>It's cross-platform and supports multiple drivers.</li><li>Data and structure synchronization.</li><li>Visual query builder and report builder.</li><li>Excellent import/export capabilities.</li><li>SSH tunneling and SSL (Secure Sockets Layer) support</li><li>Supports many languages, including Polish, Russian, Japanese, Portuguese, Korean, Simplified Chinese, Traditional Chinese, Spanish, French, and English.</li><li>Compatible with other Navicat products, including Navicat Monitor, Navicat Data Modeler, Navicat Report Viewer, and Navicat Data Modeler Essentials.</li></ul><p><strong>Cons:</strong></p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>It's a commercial product. Perhaps a con if you're on a very limited budget and need to work with multiple database types, such as PostgreSQL, SQL Server, or SQLite. In that case, you would have to purchase the Navicat Premium package.</li><li>It's somewhat resource-intensive, as it requires fairly high memory while running.</li></ul><h1 class="blog-sub-title">Coming Up...</h1><p>In part 3, we'll be examining specific features, such as supported platforms and databases, as well as SQL Editing.</p></body></html>]]></description>
</item>
<item>
<title>A Database Tools Showdown: HeidiSQL versus Navicat - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/2040-a-database-tools-showdown-heidisql-versus-navicat-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Database Tools Showdown: HeidiSQL versus Navicat</title></head><body><b>Jun 17, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Meet the Contestants</h1><p><img src="https://www.navicat.com/link/Blog/Image/2022/20220617/heidi_vs_navicat_header.png" /></p><p>There is no shortage of either free or commercial relational database clients. Some provide basic functionality, while others offer advanced tools that help professionals carry out their many day-to-day activities efficiently. While there is some correlation between cost and the number of features offered, each product needs to be evaluated on its own merits when deciding which product(s) to use yourself.</p><p>Back in June of last year, we compared <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1728-dbeaver-vs-navicat-a-database-tools-showdown" target="_blank">DBeaver, a popular free tool, to Navicat Premium 15</a>. Now, it's high time that we did it again. In this installment of Database Tools Showdown, we'll be taking a look at another freebie called HeidiSQL, and see how it stacks up against <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>.</p><h1 class="blog-sub-title">Some Product Background</h1><p><strong>First, the challenger:</strong> </p><p>When Ansgar Becker invented it in 2002, his aim was to create software that was easy to learn. Becker chose the name "HeidiSQL" when it was suggested to him by a friend as a tribute to Heidi Klum.  The name also reflected Becker's own nostalgia for Heidi, Girl of the Alps. "Heidi", as it's affectionately known, lets you view and edit both the data and structures of MariaDB, MySQL, Microsoft SQL Server, PostgreSQL and SQLite. </p><p>HeidiSQL began as a MySQL front-end in 1999 under the project name "MySQL-Front".
In 2004, during a period of inactivity, Becker sold the MySQL-Front branding to his business partner, Nils Hoyer.  In April 2006, Becker open-sourced the application on SourceForge, renaming the project "HeidiSQL". Later, he added support for other database servers as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Microsoft SQL Server support was added in March 2011 for the 7.0 release.</li><li>PostgreSQL  was introduced in March 2014 for the 9.0 release.</li><li>SQLite support was introduced in March 2020 for the 11.0 release.</li></ul><p>Today, HeidiSQL is routinely ranked among the most popular tools for MariaDB and MySQL worldwide. Since the 8.0 release, HeidiSQL offers its GUI in about 22 languages other than English.</p><p><strong>About the reigning champion:</strong></p><p>Navicat Premium is Navicat's flagship product. It's a commercial database development and design tool that allows users to simultaneously connect to multiple local and/or cloud databases from a single application. Navicat Premium was designed to meet the needs of a variety of audiences, from database administrators and programmers to various businesses/companies that serve clients and share information with partners.</p><p>The initial version of Navicat was developed by Mr. Ken Lin in 2001. The main goal of the initial version was to simplify the management of MySQL instances. In 2008, Navicat for MySQL was the winner of the Hong Kong ICT 2008 Award of the Year, Best Business Grand Award and Best Business (Product) Gold Award. Navicat Premium was launched in 2009. 
It combined all previous Navicat versions into a single product and could connect to all popular database types simultaneously, giving users the ability to perform data migration between different (heterogeneous) database types.</p><h1 class="blog-sub-title">Going Forward</h1><p>Now that we've properly introduced our contestants, the next installment(s) will delve into each tool's feature set, and compare them for usability, performance, user ratings, and more!</p></body></html>]]></description>
</item>
<item>
<title>Exporting MySQL Tables to CSV</title>
<link>https://www.navicat.com/company/aboutus/blog/2027-exporting-mysql-tables-to-csv.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Exporting MySQL Tables to CSV</title></head><body><b>Jun 10, 2022</b> by Robert Gravelle<br/><br/><p>A CSV is a Comma-Separated Values file, which allows data to be saved in a tabular format. It's long been the preferred format for transferring data between databases.  More recently, Internet-driven formats such as XML and JSON have also gained much traction.  CSV files are well suited to databases because they represent table data exceptionally well and can be used with just about any spreadsheet program, such as Microsoft Excel or Google Spreadsheets. In today's blog, we'll be taking a look at a few ways to export table data to CSV in MySQL.</p><h1 class="blog-sub-title">Using the Command Line</h1><p>Most relational databases, MySQL included, provide commands to export and import to and from CSV. </p><p>Make sure that you start your MySQL server instance with the <i>secure-file-priv</i> option.  It sets the directory where MySQL imports and exports data using statements such as LOAD DATA and SELECT ... INTO OUTFILE. You can see the current setting using the command:</p><pre>SHOW VARIABLES LIKE "secure_file_priv";</pre><p>All that's left to do now is select the data and specify the location of the output file.  Here's a statement that outputs an entire table:</p><pre>TABLE tableName
INTO OUTFILE 'path/outputFile.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY ''
LINES TERMINATED BY '\n';</pre><p>You can also filter the data as you would in any SELECT query. Here's an example that filters both columns and values:</p><pre>SELECT columnName, ...
FROM tableName
WHERE columnName = 'value'
LIMIT 1000
INTO OUTFILE 'path/outputFile.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY ''
LINES TERMINATED BY '\n';</pre><p>Want to include column headers?
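<p>If you're pulling rows application-side rather than having the server write the file, a scripting language can add the header row itself. Here's a minimal sketch using Python's standard csv module, with sqlite3 standing in for MySQL (the table, columns, and file name are illustrative):</p>

```python
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (orderNumber INTEGER, status TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(10100, "Shipped"), (10101, "In Process")])

cur.execute("SELECT orderNumber, status FROM orders")
with open("outputFile.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header row from cursor metadata
    writer.writerows(cur.fetchall())                  # then the data rows
```

<p>Server-side, pure SQL can accomplish the same thing, as shown next.</p>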
That's easily done using the UNION statement:</p><pre>(SELECT 'columnHeading', ...)
UNION
(SELECT column, ...
 FROM tableName
 INTO OUTFILE 'path-to-file/outputFile.csv'
 FIELDS TERMINATED BY ',' ENCLOSED BY '"'
 ESCAPED BY '"'
 LINES TERMINATED BY '\n')</pre><h1 class="blog-sub-title">Using mysqldump</h1><p>mysqldump is a command line utility provided by MySQL for exporting tables, databases, and entire servers. Moreover, it can also be utilized for backup and recovery. Issue the following command in a command prompt/terminal to export a table:</p><pre>mysqldump -u [username] -p -t -T /path/to/directory [database] [tableName] --fields-terminated-by=,</pre><h1 class="blog-sub-title">Using Navicat's Export Wizard</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16 for MySQL</a> comes with a very powerful export (and import) wizard, which can export data in multiple formats, including .xlsx, .json, and .sql. To start the export wizard, select the corresponding table, right-click > Export Wizard, and select the format:</p><img alt="export_formats (50K)" src="https://www.navicat.com/link/Blog/Image/2022/20220610/export_formats.jpg" height="512" width="642" /><p>You can choose to export one table, the entire database, or anything in between:</p><img alt="select_tables (71K)" src="https://www.navicat.com/link/Blog/Image/2022/20220610/select_tables.jpg" height="512" width="642" /><p>You can also select exactly which fields you want, if you're not interested in all the columns:</p><img alt="select_fields (40K)" src="https://www.navicat.com/link/Blog/Image/2022/20220610/select_fields.jpg" height="512" width="642" /><p>Navicat supports a wealth of options, such as including headers, delimiters, error handlers, and more:</p><img alt="other_options (49K)" src="https://www.navicat.com/link/Blog/Image/2022/20220610/other_options.jpg" height="512" width="642" /><h1 class="blog-sub-title">Conclusion</h1><p>CSV is not the
perfect format, and does have limitations.  For example, you cannot save data types or formulas in this format. That being said, CSV is still a very important data transfer format; one that every DBA should be familiar with.</p></body></html>]]></description>
</item>
<item>
<title>How to Test Insert and Update Statements before Executing Them</title>
<link>https://www.navicat.com/company/aboutus/blog/1984-how-to-test-insert-and-update-statements-before-executing-them.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How to Test Insert and Update Statements before Executing Them</title></head><body><b>Jun 2, 2022</b> by Robert Gravelle<br/><br/><p>In some cases, running a well crafted UPDATE statement in production can save the day. Other times, a botched UPDATE can cause more harm than the initial issue.  You can always execute your Data Manipulation Language (DML) statements on a development or test database, but due to differences in the data, this approach makes determining the statement's effects on the production data a crapshoot at best. </p><p>So what are some options to accurately predict what the result of an INSERT, UPDATE, or DELETE statement will be on production data before running it? Well, that depends on the database vendor and product, at least in part. There are also some solutions that enjoy widespread support.  We'll be taking a look at both options in this blog.</p><h1 class="blog-sub-title">Syntax Check</h1><p>The process of testing your statements can be split into two stages. The first is to verify that the statement is syntactically valid, i.e., that it will execute. The next step is to ascertain whether or not it produces the result that you intended. </p><p>One way to validate your syntax is to ask your database (DB) for the query plan. This tells you two things:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Whether there are any syntax errors in the query; if so, the query plan command itself will fail.</li><li>How the DB is planning to execute the query, e.g. what indexes it will use.</li></ul><p>In most relational DBs the query plan command is "explain" or "describe", as in:</p><pre>explain update ...;</pre><p>In Navicat's database administration and development tools, you can run the EXPLAIN command with the click of a button.
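<p>The same syntax check can also be scripted outside of any GUI. Here's a minimal sketch using Python's sqlite3 module, where SQLite's equivalent command is EXPLAIN QUERY PLAN (the table is illustrative; a syntax error surfaces as an exception before anything runs):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (orderNumber INTEGER PRIMARY KEY, status TEXT)")

# A syntactically valid statement: the plan comes back as rows,
# and the UPDATE itself is never executed.
plan = cur.execute(
    "EXPLAIN QUERY PLAN UPDATE orders SET status = 'Shipped' WHERE orderNumber = 10100"
).fetchall()
print(plan)  # e.g. a SEARCH step using the primary key

# A syntactically broken statement: the plan command itself fails.
try:
    cur.execute("EXPLAIN QUERY PLAN UPDTAE orders SET status = 'x'")
except sqlite3.OperationalError as e:
    print("syntax error caught:", e)
```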
If the statement fails, you'll get an error message similar to the following:</p><img alt="explain (99K)" src="https://www.navicat.com/link/Blog/Image/2022/20220602/explain.jpg" height="546" width="653" /><p>Otherwise, the query plan will be displayed in a tabular format:</p><img alt="explain_success (65K)" src="https://www.navicat.com/link/Blog/Image/2022/20220602/explain_success.jpg" height="305" width="700" /><h1 class="blog-sub-title">Statement Testing</h1><p>You can parse a statement to see if it's syntactically valid, but that doesn't mean it will produce the correct results. To see what your query will actually do, you've got a few options.</p><h3>Turn Off Autocommit</h3><p>Most relational DBs provide a way to disable autocommit mode so that you must issue the COMMIT statement to store your changes to disk or ROLLBACK to ignore the changes.</p><p>In MySQL the command to disable autocommit mode is:</p><pre>SET autocommit = 0;
-- or
SET autocommit = OFF;</pre><p>In SQL Server, the equivalent is to enable implicit transactions, which stops statements from being committed automatically:</p><pre>SET IMPLICIT_TRANSACTIONS ON;</pre><p>With autocommit turned off, you are now ready to give your statement(s) a try, by running it within a transaction:</p><pre>-- 1. start a new transaction
START TRANSACTION;

-- 2. insert a new order for customer 145
INSERT INTO orders(orderNumber,
                   orderDate,
                   requiredDate,
                   shippedDate,
                   status,
                   customerNumber)
VALUES(@orderNumber,
       '2005-05-31',
       '2005-06-10',
       '2005-06-11',
       'In Process',
       145);

-- 3. then, after evaluating the results,
--    rollback the changes
ROLLBACK;</pre><p>That will leave your DB in exactly the same state as it was before you ran your statement.</p><h1 class="blog-sub-title">Convert Your Statement Into a SELECT</h1><p>A decidedly low tech approach to testing DML statements is to convert them to SELECTs.
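<p>For example, reusing a DML statement's WHERE clause in a SELECT previews exactly which rows would be touched before anything is changed. A minimal sketch with Python's sqlite3 module (table and values are illustrative):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (orderNumber INTEGER PRIMARY KEY, status TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(10100, "In Process"), (10101, "Shipped"), (10102, "In Process")])

where = "status = 'In Process'"

# Preview: which rows would the UPDATE below affect?
preview = cur.execute(f"SELECT * FROM orders WHERE {where}").fetchall()
print(preview)  # [(10100, 'In Process'), (10102, 'In Process')]

# Only after inspecting the preview do we run the real statement.
cur.execute(f"UPDATE orders SET status = 'Shipped' WHERE {where}")
print(cur.rowcount)  # 2 -- matches the preview row count
```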
As long as you don't expect them to retrieve the entire database, running them as SELECTs is a good way to see exactly which records will be affected. All you need to do is replace the action word with SELECT:</p><pre>INSERT INTO orders ...
-- becomes
SELECT * FROM orders ...</pre><h1 class="blog-sub-title">Conclusion</h1><p>There are few things scarier than executing DML statements in a production environment. Thankfully, there are ways to minimize the risk so that you don't have to cross your fingers or recite the Hail Mary.</p><p>If you'd like to give Navicat 16 a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Nested Joins Explained</title>
<link>https://www.navicat.com/company/aboutus/blog/1948-nested-joins-explained.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Nested Joins Explained</title></head><body><b>May 26, 2022</b> by Robert Gravelle<br/><br/><p>Just when you thought you knew every type of join, here comes another! Perhaps you've heard of nested joins, or even nested-loop query plans, and wondered what they were. Well, wonder no more.  Today's blog will settle the mystery once and for all!</p><h1 class="blog-sub-title">A Case of Terminology</h1><p>In the world of relational databases, there can be many different names for the same thing. Joins are no exception to this rule. In fact, when it comes to Nested Joins, database practitioners' opinions vary.  Some say that there is no such thing; others are more pragmatic and acknowledge that they are simply an alternative term for multi-table joins. </p><p>In all likelihood, the term came about when referring to nested-loop query plans.  These are often used by the query engine to answer joins. In its crudest form, a nested loop goes something like this:</p><pre>for all the rows in the outer table
  for all the rows in the inner table
    if outer_row and inner_row satisfy the join condition
      emit the rows
  next inner
next outer</pre><p>This is the simplest, but also the slowest, type of nested loop. Meanwhile, multi-table nested-loop joins perform even worse, because their cost grows with the product of the row counts of all the tables involved.</p><p>A more efficient form of nested loop is nested-loop-over-index:</p><pre>for all the rows that pass the filter from the outer table
  use the join qualifier from the outer table row on an index on the inner table
    if a row is found using the index lookup
      emit the rows
next outer</pre><h1 class="blog-sub-title">Nested Join Syntax</h1><p>Now that we've established that the term "Nested Joins" simply refers to joins between more than two tables, let's take a quick look at their syntax.
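<p>As a quick aside before the syntax itself: the two nested-loop plans sketched above are easy to make concrete. Here's a short Python version of both, with toy tables standing in for real ones; the over-index variant replaces the inner scan with a keyed lookup (table contents and key names are illustrative):</p>

```python
# Toy "tables": lists of dicts, joined on customer_id.
customers = [{"customer_id": 1, "name": "Mary"}, {"customer_id": 2, "name": "Patricia"}]
rentals = [{"rental_id": 10, "customer_id": 1}, {"rental_id": 11, "customer_id": 1},
           {"rental_id": 12, "customer_id": 2}]

# 1. Plain nested loop: scan the entire inner table once per outer row.
plain = [(c["name"], r["rental_id"])
         for c in customers
         for r in rentals
         if c["customer_id"] == r["customer_id"]]

# 2. Nested-loop-over-index: build an index on the inner table first,
#    then do a keyed lookup instead of a full inner scan.
index = {}
for r in rentals:
    index.setdefault(r["customer_id"], []).append(r)
indexed = [(c["name"], r["rental_id"])
           for c in customers
           for r in index.get(c["customer_id"], [])]

print(plain == indexed)  # True -- same rows, far fewer comparisons
```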
</p><p>Typically, when we need to join multiple tables and/or views, we would list them one by one, using this generic format:</p><pre>FROM Table1
  [ join type ] JOIN Table2
    ON condition2
  [ join type ] JOIN Table3
    ON condition3</pre><p>But this is not the only way. The official ANSI syntax standard for SQL proposes another valid way to write the above join:</p><pre>FROM Table1
  [ join type ] JOIN Table2
  [ join type ] JOIN Table3
    ON condition3
    ON condition2</pre><p>To make the above join style more human readable, we can add parentheses and indentation to make the meaning clearer:</p><pre>FROM Table1
  [ join type ] JOIN ( Table2
                       [ join type ] JOIN Table3
                         ON condition3 )
    ON condition2</pre><p>Now it's easier to see that the join between Table2 and Table3 is specified first and has to be done first, before joining to Table1. This query style also positions the join between Table2 and Table3 in such a way that they appear to be nested. In fact, we could consider the join between Table2 and Table3 to be nested.</p><p>Here is the same query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> written in both styles:</p><img alt="syntax_comparison (98K)" src="https://www.navicat.com/link/Blog/Image/2022/20220526/syntax_comparison.jpg" height="816" width="465" /><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog shed some light on the terms "Nested Joins" and "Nested-loop Query Plan". So, next time you hear them, realize that they are merely referring to the joining of multiple tables or views, regardless of which syntax is employed in doing so.</p></body></html>]]></description>
</item>
<item>
<title>Benefits of Third-party Database Management Tools</title>
<link>https://www.navicat.com/company/aboutus/blog/1907-benefits-of-third-party-database-management-tools.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Benefits of Third-party Database Management Tools</title></head><body><b>May 19, 2022</b> by Robert Gravelle<br/><br/><p>Having completed our series on Top SQL Query Mistakes last week, it's time to take a page from the Monty Python playbook and move on to something completely different. And that something is why database developers and administrators should consider using third-party database management tools (DBMT) to fill the gaps left by the major database manufacturers. Regardless of price, all 3rd party DBMT provide functionality that fulfills the needs of the general DBA community by either complementing or replacing database manufacturers' tool sets. Today's blog will highlight just a few of the benefits provided by 3rd party DBMT.</p><h1 class="blog-sub-title">Heterogeneous DBMS Support</h1><p>It's fairly rare to find an IT organization that supports a single DBMS platform these days. Most businesses utilize several different database types - both locally hosted and in the cloud. For example, my own employer has some local PostgreSQL databases as well as some of Amazon's online DB services. The local DB instances are well suited to development, testing, and certain production uses. Meanwhile, Amazon database services pair well with other Amazon services, such as AI, batch processing, etc.</p><p>A growing class of third-party tool providers is capitalizing on this DBMS proliferation by providing tools that are purposely designed to work with a variety of database types. Their value stems from their ability to manage multiple DBMS from a single application interface. 
Some products, such as <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, can even connect to multiple heterogeneous DBMS simultaneously, allowing admins to transfer data between databases as easily as copying files on a PC desktop.</p><h1 class="blog-sub-title">Increased Data Security</h1><p>An organization's data is invariably its greatest asset. As such, most businesses place a high premium on security and choose 3rd party DBMT that provide more secure connection options than those offered by individual DB vendors' own administration and development tools.</p><p>One popular feature is SSH tunneling. It's a method of transporting data that employs an encrypted SSH connection. SSH tunnels allow connections made to a local port to be shuttled to a remote machine via a secure channel.</p><p>3rd party DBMT also add value by supporting multiple authentication methods, such as PAM authentication for MySQL and MariaDB, Kerberos and X.509 authentication for MongoDB, and GSSAPI authentication for PostgreSQL. High-end products like Navicat provide additional authentication mechanisms and high-performance environments so that you never have to worry about connecting over an insecure network.</p><h1 class="blog-sub-title">Collaboration Support</h1><p>One feature that is almost universally missing from DB vendor administration and development tools is the ability to share queries and other assets with teammates. Navicat's main collaboration tool is Navicat Cloud. It uses Amazon Simple Storage Service (Amazon S3) to store (256-bit AES) encrypted connection settings, queries, models, snippets, virtual group information, and even chart workspaces. These may be shared as well as synchronized across all your devices, including Windows, macOS, Linux, and iOS. Files stored in Navicat Cloud automatically show up in Navicat so that you can get real-time access at any time and from anywhere. 
</p><h1 class="blog-sub-title">Conclusion</h1><p>This blog presented three ways that 3rd party Database Management Tools such as <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> provide value to organizations by either complementing or replacing database manufacturers' tool sets. In doing so, they can greatly simplify everyday tasks and promote increased productivity.</p></body></html>]]></description>
</item>
<item>
<title>Some Top SQL Query Mistakes - Part 5</title>
<link>https://www.navicat.com/company/aboutus/blog/1904-some-top-sql-query-mistakes-part-5.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Top SQL Query Mistakes: Part 5 - Predicate Evaluation Order</title></head><body><b>May 16, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Predicate Evaluation Order</h1><p>Just before Part 3 of this series, we took a brief pause to talk about <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1895-predicates-in-sql" target="_blank">Predicates in SQL</a>, as they factored into mistakes related to Outer Joins. In this final installment of the series on Top SQL Query Mistakes, predicates will once again enter the picture, as we examine how predicate evaluation order can cause seemingly well-constructed queries to fail with errors.</p><h1 class="blog-sub-title">A Quick Review of Predicate Processing Order</h1><p>In terms of logical query processing order, queries are executed in the following sequence:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>FROM</li><li>WHERE</li><li>GROUP BY</li><li>HAVING</li><li>SELECT</li></ul><p>Hence, logically speaking, the FROM clause is processed first to define the source data set. Next, the WHERE predicates are applied to whittle down the result set, followed by GROUP BY, and so on.</p><p>In practice, predicate evaluation and processing order is far less rigid, as the query optimizer may move expressions around in order to produce the most efficient plan for retrieving the data. As a result, a filter in the WHERE clause may not be applied before the subsequent clauses are processed. In fact, a predicate may be applied much later in the physical execution plan than you might expect.</p><p>Another common source of confusion and frustration for database developers is that, unlike in most programming languages, predicates are not always evaluated from left to right. 
This means that, if you have a WHERE clause containing the filters "WHERE a=1 AND b=2", there is no guarantee that "a=1" will be evaluated first. In fact, there is no easy way to tell in which order filters will be evaluated simply by looking at the query.</p><h1 class="blog-sub-title">A Practical Example</h1><p>To better understand predicate evaluation order, we'll write a SELECT query against the following <i>accounts</i> table, seen in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat 16</a>'s Table Designer:</p><img alt="accounts_table_design (33K)" src="https://www.navicat.com/link/Blog/Image/2022/20220516/accounts_table_design.jpg" height="140" width="616" /><p>Here is some sample data that we'll be querying against:</p><img alt="accounts_table (24K)" src="https://www.navicat.com/link/Blog/Image/2022/20220516/accounts_table.jpg" height="255" width="333" /><p>In the <i>account_number</i> column, business accounts are assigned a numeric identifier, while personal accounts are given an identifier made up of characters. This is not great table design: each account type should get its own column (or its own table) with an appropriate data type, rather than sharing a single mixed-type column. Nonetheless, altering the design is not always possible, so we must deal with the table as is.</p><p>So, with this in mind, let's devise a query to retrieve all business-type accounts with an <i>account_number</i> that is greater than 50. 
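</p><p>Written as plain SQL, a naive version of that query might look something like this (a sketch assuming MySQL's CAST syntax; the screenshot below shows the query as actually run):</p><pre>SELECT *
FROM accounts
WHERE account_type LIKE 'Business%'
  AND CAST(account_number AS UNSIGNED INTEGER) > 50;</pre><p>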
The resulting query might look like this one:</p><img alt="query_1 (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220516/query_1.jpg" height="129" width="431" /><p>In some databases, the query produces an error:</p><pre>Conversion failed when converting the varchar value 'ACFB' to data type int</pre><p>The query will fail any time that the query optimizer decides to evaluate the "CAST(account_number AS UNSIGNED INTEGER) > 50" predicate before the "account_type LIKE 'Business%'" one. The safest bet for avoiding errors like the one above is to either: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Design the table correctly and avoid storing mixed data in a single column. <p>OR</p></li><li>Use a CASE expression to guarantee that only valid numeric values will be converted to the INTEGER data type, like this:<p><img alt="query_2 (43K)" src="https://www.navicat.com/link/Blog/Image/2022/20220516/query_2.jpg" height="247" width="451" /></p></li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In this series on Top SQL Query Mistakes, we explored how seemingly intuitive ways of constructing SQL queries can result in anti-patterns that lead to erroneous results and/or performance degradation. Be especially wary of predicate placement and evaluation order, as these contribute to many unexpected issues.</p></body></html>]]></description>
</item>
<item>
<title>Some Top SQL Query Mistakes - Part 4</title>
<link>https://www.navicat.com/company/aboutus/blog/1898-some-top-sql-query-mistakes-part-4.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Top SQL Query Mistakes: Part 4 - Breaking Subqueries </title></head><body><b>May 11, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Breaking Subqueries</h1><p>In this series on Top SQL Query Mistakes, we've seen several examples of SQL queries that look perfectly solid on first inspection, but can lead to erroneous results and/or performance degradation. Last week, we learned how the placement of predicates can adversely affect query execution - particularly in outer joins. Today's installment will focus on subqueries, and how they can break an SQL statement when changes are made to any of its underlying tables.</p><h1 class="blog-sub-title">Single vs. Multiple Value Subqueries</h1><p>Before we compare single and multiple value subqueries, we should briefly cover what a subquery is. A subquery is a complete SQL query that is nested inside a larger query. A subquery may be placed in the SELECT, FROM, or WHERE clause.</p><p>Now that we know what a subquery is and where it can go in a query, it should be noted that, like any SELECT query, a subquery may return one or more rows. This distinction is quite important, because it affects how you would write your query statement. 
For example, here's a query against the Sakila Sample Database in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a> that fetches all of the actors who appeared in the film "ALONE TRIP":</p><img alt="subquery_single_row (98K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/subquery_single_row.jpg" height="374" width="664" /><p>Since there should only be one film named "ALONE TRIP", we can use the equality (=) operator to match the film_ids against.</p><p>Contrast the above query with the following one:</p><img alt="subquery_multiple_rows (46K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/subquery_multiple_rows.jpg" height="267" width="553" /><p>In this case, the subquery selects all of the actors who appeared in the movie. Naturally, this subquery returns multiple rows, so we should employ the IN() operator to match <i>actor_id</i>s against.</p><h1 class="blog-sub-title">How Single Row Subqueries Break</h1><p>As mentioned earlier, a subquery can be placed in the SELECT clause to fetch a column that is in some way correlated to the main query table. For example, consider these two related products and factories tables, shown in the <a class="default-links" href="https://www.navicat.com/en/products/navicat-data-modeler" target="_blank">Navicat Data Modeler</a>:</p><img alt="products_factories_diagram (22K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/products_factories_diagram.jpg" height="137" width="416" /><p>The products and factories tables are linked using the common <i>sku</i> field.</p><p>Now, let's write a query to extract the <i>factory_id</i> for each product. 
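</p><p>Spelled out as plain SQL, a correlated-subquery version of that query might look like this (a sketch based on the diagram above; the actual column list may differ from the screenshot):</p><pre>SELECT p.sku,
       (SELECT f.factory_id
        FROM factories f
        WHERE f.sku = p.sku) AS factory_id
FROM products p;</pre><p>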
One way to do that would be to write the query using a correlated subquery to retrieve the product <i>factory_id</i>:</p><img alt="product_query (31K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/product_query.jpg" height="259" width="392" /><p>Note that the point here is to illustrate a technique; there are more efficient ways to retrieve the same information. In any event, we do get the correct result set, and all is well.</p><p>The query will continue to work perfectly well until the day the company decides to build a new factory as sales increase:</p><img alt="new_factory (11K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/new_factory.jpg" height="120" width="389" /><p>The extra row in the factories table now causes our query to generate an error:</p><img alt="error_message (49K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/error_message.jpg" height="281" width="654" /><p>The error is telling us that the outer query expected a scalar value, but our subquery returned a result set. We can fix the issue and list all factories that manufacture each product by using a JOIN:</p><img alt="query_with_join (30K)" src="https://www.navicat.com/link/Blog/Image/2022/20220511/query_with_join.jpg" height="265" width="392" /><h1 class="blog-sub-title">One More Thing...</h1><p>Be aware that the same error can occur in any clause where a column or expression is tested against a subquery, for example "column = (SELECT value FROM Table)". In that case, the solution is to use the IN() operator instead of the equality (=) operator.</p></body></html>]]></description>
</item>
<item>
<title>Some Top SQL Query Mistakes - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/1897-some-top-sql-query-mistakes-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Top SQL Query Mistakes - Part 3</title></head><body><b>May 6, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Outer Joins and Cartesian Products</h1><p>In this series on Top SQL Query Mistakes, we've been exploring how seemingly intuitive ways of constructing SQL queries can result in anti-patterns that lead to erroneous results and/or performance degradation. Last week, we took a break from the series to talk about Predicates in SQL. In this installment, we'll be learning how their placement can adversely affect query execution - particularly in outer joins.</p><h1 class="blog-sub-title">What Are Outer Joins?</h1><p>There are four basic join types employed in linking related tables and views: inner, left, right, and full outer. With an inner join, rows from either table that are unmatched in the other table are not returned. In an outer join, matched rows are returned along with unmatched rows from one or both tables. The last three join types are all instances of outer joins, whereby:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>LEFT JOIN also returns unmatched rows from the left table.</li><li>RIGHT JOIN also returns unmatched rows from the right table.</li><li>FULL OUTER JOIN also returns unmatched rows from both tables.</li></ul><h1 class="blog-sub-title">How Outer Joins Go Wrong</h1><p>While outer joins certainly have their place in the database practitioner's arsenal, developers have a tendency to use them even in situations where they are not needed. Moreover, an outer join query can produce completely different results depending on how you construct it, and where you place predicates within it. To illustrate, let's look at an example.</p><p>We would like to retrieve a list of ALL customers (whether they placed any orders or not), along with the total number of orders that they placed since the beginning of May, 2005. 
To do so, we would employ an outer join to link the <i>customers</i> and <i>orders</i> tables as follows:</p><pre>SELECT C.customerName, COUNT(O.customerNumber) AS 2005_orders
FROM customers AS C
LEFT OUTER JOIN orders AS O
  ON C.customerNumber = O.customerNumber
WHERE O.orderDate >= '2005-05-01'
GROUP BY C.customerName
ORDER BY 2005_orders DESC;</pre><p>The result should contain every customer - including those with no matching orders - along with their order counts. Unfortunately, when we run the query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>, only 13 rows are returned, even though there are 122 unique customers in the table (not shown):</p><img alt="customer_orders_bad (74K)" src="https://www.navicat.com/link/Blog/Image/2022/20220506/customer_orders_bad.jpg" height="469" width="511" /><p>To understand where we went wrong, let's rebuild the query one step at a time, starting with only the columns and outer join:</p><img alt="outer_join_without_where_clause (121K)" src="https://www.navicat.com/link/Blog/Image/2022/20220506/outer_join_without_where_clause.jpg" height="757" width="484" /><p>Now we are getting all of the customers. Those who have not placed any orders have NULL <i>customerNumber</i>s, since that column comes from the <i>orders</i> table.</p><p>Now, let's apply the WHERE clause predicate:</p><img alt="outer_join_with_where_clause (90K)" src="https://www.navicat.com/link/Blog/Image/2022/20220506/outer_join_with_where_clause.jpg" height="557" width="379" /><p>All of a sudden, we've lost many customers! 
The problem is that the predicate in the WHERE clause turned the outer join into an inner join.</p><p>To correct the issue, we need to move the WHERE predicate into the join condition:</p><img alt="outer_join_with_date (114K)" src="https://www.navicat.com/link/Blog/Image/2022/20220506/outer_join_with_date.jpg" height="763" width="370" /><p>We can now adjust our original query to fetch all customers:</p><img alt="customer_orders_good (89K)" src="https://www.navicat.com/link/Blog/Image/2022/20220506/customer_orders_good.jpg" height="578" width="511" /><h1 class="blog-sub-title">The Moral of the Story</h1><p>Always be careful about where you filter out rows. In the above example, the WHERE clause was the issue; in a more complex query involving multiple joins, the incorrect filtering may happen on a subsequent table operator (such as a join to another table) instead of in the WHERE clause.</p></body></html>]]></description>
</item>
<item>
<title>Predicates in SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1895-predicates-in-sql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Predicates in SQL</title></head><body><b>May 3, 2022</b> by Robert Gravelle<br/><br/><p>This week, we're going to briefly hit the Pause button on the Some Top SQL Query Mistakes series in order to talk about Predicates in SQL. The reason is that predicates will factor into Part 3 of the Top SQL Query Mistakes series.</p><h1 class="blog-sub-title">What Is a Predicate?</h1><p>A predicate is simply an expression that evaluates to TRUE, FALSE, or UNKNOWN. Predicates are typically employed in the search condition of WHERE and HAVING clauses, the join conditions of FROM clauses, as well as any other part of a query where a boolean value is required.</p><p>There are numerous types of predicates, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Comparison</li><li>LIKE </li><li>BETWEEN </li><li>IN</li><li>EXISTS </li><li>IS NULL</li></ul><p>In the remainder of this article, we'll examine a few examples of the above predicate types.</p><h1 class="blog-sub-title">Comparison Predicates</h1><p>Any time that we use a comparison operator in an expression, such as <code>WHERE employee_salary > 100000</code>, we are constructing a predicate that evaluates to TRUE, FALSE, or UNKNOWN. Comparison operators include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>= Equal to</li><li>> Greater than</li><li>&lt; Less than</li><li>>= Greater than or equal to</li><li>&lt;= Less than or equal to</li><li>&lt;&gt; Not equal to</li></ul><p>Hence, a comparison predicate takes the form of:</p><pre>expression_1 comparison_operator expression_2</pre><p>In a comparison predicate, expression_2 can also be a subquery. 
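</p><p>For example, a comparison predicate whose right-hand side is a scalar subquery might look like this (illustrative table and column names):</p><pre>WHERE employee_salary > (SELECT AVG(salary) FROM employees)</pre><p>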
If the subquery does not return any rows, its result is NULL, so the comparison predicate evaluates to UNKNOWN.</p><h1 class="blog-sub-title">LIKE Predicate</h1><p>In SQL, the number one pattern-matching predicate is the LIKE operator, as it compares column values with a specified pattern. LIKE works with any character or date data type. Here's an example:</p><img alt="like_example (83K)" src="https://www.navicat.com/link/Blog/Image/2022/20220429/like_example.jpg" height="360" width="681" /><h1 class="blog-sub-title">BETWEEN Predicate</h1><p>The BETWEEN operator specifies a range, which determines the lower and upper bounds of qualifying values. For instance, in the predicate <code>income BETWEEN 5000 AND 20000</code>, the selected data is a range of values greater than or equal to 5000 and less than or equal to 20000. The BETWEEN operator can be used with numeric, text, and date data types. Here's an example:</p><img alt="between_example (46K)" src="https://www.navicat.com/link/Blog/Image/2022/20220429/between_example.jpg" height="238" width="659" /><h1 class="blog-sub-title">IN Predicate</h1><p>The IN operator allows the specification of one or more expressions to be used for a query search. The result of the condition is TRUE if the value of the corresponding column equals one of the expressions specified by the IN predicate:</p><img alt="in_example (53K)" src="https://www.navicat.com/link/Blog/Image/2022/20220429/in_example.jpg" height="274" width="601" /><h1 class="blog-sub-title">EXISTS Predicate</h1><p>The EXISTS predicate accepts a subquery as an argument. 
It returns TRUE if the subquery returns one or more rows, and FALSE if it returns zero rows.</p><p>Here's an example:</p><img alt="exists_example (45K)" src="https://www.navicat.com/link/Blog/Image/2022/20220429/exists_example.jpg" height="532" width="331" /><h1 class="blog-sub-title">IS NULL Predicate</h1><p>Use IS NULL to determine whether an expression is null, because you cannot test for null by using the = comparison operator. When applied to a row value expression, every element of the row must test the same way.</p><p>The IS NULL predicate takes the following form:</p><pre>expression IS [NOT] NULL</pre><p>For example, the expression <code>x IS NULL</code> is TRUE if x is null.</p><p>IS UNKNOWN is a synonym for IS NULL when the expression is of the BOOLEAN type.</p><p>Here's a query that uses the IS NOT NULL predicate to fetch all actors whose last name is a non-NULL value:</p><img alt="is_not_null_example (24K)" src="https://www.navicat.com/link/Blog/Image/2022/20220429/is_not_null_example.jpg" height="207" width="394" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we interrupted the regularly scheduled series to bring you this important lesson on SQL predicates. Typically employed in the search condition of WHERE and HAVING clauses and in the join conditions of FROM clauses, predicates are expressions that evaluate to TRUE, FALSE, or UNKNOWN. We'll be seeing predicates again in next week's continuation of the Top SQL Query Mistakes series.</p></body></html>]]></description>
</item>
<item>
<title>Some Top SQL Query Mistakes - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1894-some-top-sql-query-mistakes-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Top SQL Query Mistakes: Part 2 - Non-SARGable Query Conditions</title></head><body><b>Apr 26, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 2: Non-SARGable Query Conditions</h1><p>Like most programmers, database developers tend to write code that is more or less a direct translation of a given request. The fact that most programming languages - SQL included - are designed to be human readable also contributes to this problem. Why is this a concern? All programming languages execute certain operations faster than others. In relational databases, the query optimizer analyzes SQL queries and determines efficient execution mechanisms, called query plans. The optimizer generates one or more query plans for each query, each of which represents one possible way to run the query. The most efficient query plan is then selected and used to run the query. As it turns out, SQL that mimics the language of a request is seldom the most efficient approach.</p><p>In this installment of the Top SQL Query Mistakes series, we'll explore one example of a poorly written SQL statement and rewrite it in a way that increases efficiency.</p><h1 class="blog-sub-title">Passing Indexed Columns to Functions</h1><p>One <i>faux pas</i> that comes up over and over again in database developers' code is the passing of indexed columns to functions. 
To illustrate, let's execute a query against this table, which has an index on the varchar <i>customerName</i> column:</p><img alt="customerName_index (95K)" src="https://www.navicat.com/link/Blog/Image/2022/20220426/customerName_index.jpg" height="432" width="812" /><p>When asked to retrieve all customers whose name starts with the letter "R", one might be inclined to use the LEFT() function to return the first character of the <i>customerName</i> column:</p><img alt="left_query (49K)" src="https://www.navicat.com/link/Blog/Image/2022/20220426/left_query.jpg" height="331" width="625" /><p>Unfortunately, by passing the indexed <i>customerName</i> column to a function, we force the query engine to evaluate the function's result for every row in the table!</p><h3>SARGable vs. Non-SARGable Queries</h3><p>In relational databases, there is a term derived from a contraction of Search ARGument ABLE: SARGable. A condition (or predicate) in a query is said to be SARGable if the DBMS engine can take advantage of an index to speed up the execution of the query. On the other side of the coin, a query that fails to be SARGable is known as a non-SARGable query. The effect is similar to searching for a specific term in a book that has no index, beginning at page one each time, instead of jumping to a list of specific pages identified in an index. Obviously, this has a negative effect on query time, so one of the steps in query optimization is to convert such conditions to be SARGable.</p><p>To make a condition such as the one above SARGable, we need to avoid the use of functions on the indexed columns. 
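</p><p>Side by side, the two forms of the request might be sketched like this (illustrative; the screenshots show the actual queries and their timings):</p><pre>-- Non-SARGable: the function call hides customerName from the index
SELECT * FROM customers WHERE LEFT(customerName, 1) = 'R';

-- SARGable: a leading-prefix LIKE pattern can use the index
SELECT * FROM customers WHERE customerName LIKE 'R%';</pre><p>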
To do that, we must express the request with this logically equivalent (and SARGable) query, using the LIKE operator:</p><img alt="like_query (54K)" src="https://www.navicat.com/link/Blog/Image/2022/20220426/like_query.jpg" height="381" width="511" /><p>Notice the greatly improved execution time.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this second installment on Top SQL Query Mistakes, we learned how non-SARGable query conditions can degrade query performance by forcing the database engine to evaluate every row of a table. The fix is to express the request with a logically equivalent (and SARGable) condition that does not rely on a function call.</p><p>If you'd like to give Navicat 16 for MySQL a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Some Top SQL Query Mistakes - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/1893-some-top-sql-query-mistakes-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Top SQL Query Mistakes - Part 1</title></head><body><b>Apr 11, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">NOT IN Versus NOT EXISTS</h1><p>There's a term that is commonly thrown around in programming circles called "anti-patterns". It refers to a response to a recurring problem that is not only ineffective, but also risks being highly counterproductive. The term was originally coined in 1995 by computer programmer Andrew Koenig, inspired by the book Design Patterns, as the antithesis of design patterns that are considered to be both reliable and effective.</p><p>Although SQL is not a procedural programming language, it turns out that it's equally susceptible to anti-patterns, especially when the query in question is fairly complex. Sometimes mistakes are hard to spot, and do not reveal themselves until the query is thrust into the pressure cooker of the production environment.</p><p>With the goal of catching SQL mistakes earlier, the next several blogs will be devoted to highlighting some of the most common culprits. We'll be using MySQL to execute today's examples, but the concepts are equally valid in any flavor of SQL.</p><h1 class="blog-sub-title">NOT IN Versus NOT EXISTS</h1><p>One common type of SELECT query retrieves data that is not included in a list of values. To illustrate, here are two very simple tables created in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 16</a>. The first table contains colors:</p><img alt="colors (24K)" src="https://www.navicat.com/link/Blog/Image/2022/20220411/colors.jpg" height="165" width="506" /><p>The second table contains products:</p><img alt="products (15K)" src="https://www.navicat.com/link/Blog/Image/2022/20220411/products.jpg" height="116" width="317" /><p>What we would like to do is select all of the colors that have not yet been associated with any products. 
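</p><p>Sketched as plain SQL against the two tables above, the tempting query and the safe alternative look something like this (illustrative; the screenshots below show the versions actually run):</p><pre>-- NOT IN: silently returns zero rows if products.color contains a NULL
SELECT color FROM colors
WHERE color NOT IN (SELECT color FROM products);

-- NOT EXISTS: behaves correctly even when NULLs are present
SELECT color FROM colors c
WHERE NOT EXISTS (SELECT 1 FROM products p WHERE p.color = c.color);</pre><p>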
In other words, we need to construct a query that returns only those colors for which there is no product with that color. One might be tempted to employ the NOT IN predicate to fetch the records in question.</p><p>We would expect the following query to return two rows (for "black" and "green") when, in fact, an empty result set is returned:</p><img alt="not_in (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220411/not_in.jpg" height="237" width="383" /><p>The problem? The presence of a NULL value in the <i>color</i> column of the <i>products</i> table, which is translated by the NOT IN predicate to:</p><pre>color NOT IN (Red, Blue, NULL)</pre><p>OR</p><pre>NOT(color=Red OR color=Blue OR color=NULL)</pre><p>The expression "color=NULL" evaluates to UNKNOWN and - as many database developers overlook - NOT UNKNOWN also evaluates to UNKNOWN! As a result, all rows are filtered out and the query returns zero rows.</p><p>This issue can also surface if requirements change, and a non-nullable column is updated to allow NULLs. Hence, even if a column disallows NULLs in the initial design, you should make sure your queries will continue to work correctly with NULLs, should things ever change.</p><p>The simplest solution is to use the NOT EXISTS predicate instead of NOT IN:</p><img alt="not_exists (39K)" src="https://www.navicat.com/link/Blog/Image/2022/20220411/not_exists.jpg" height="342" width="433" /><p>Problem solved!</p><p>So why does this work? Whereas the IN keyword compares the tested value against all values in the corresponding subquery column, EXISTS simply evaluates to true or false. Consequently, using the IN operator, the SQL engine will scan all records fetched from the inner query. 
On the other hand, if we are using EXISTS, the SQL engine will stop the scanning process as soon as it finds a match.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this first installment on Top SQL Query Mistakes, we learned how anti-patterns can occur in SELECT queries, starting with the erroneous use of the NOT IN predicate.</p><p>If you'd like to give Navicat 16 for MySQL a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Working with Dates and Times in MySQL - Part 5</title>
<link>https://www.navicat.com/company/aboutus/blog/1892-working-with-dates-and-times-in-mysql-part-5.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Dates and Times in MySQL - Part 5</title></head><body><b>Apr 1, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Querying by Date</h1><p>In this final installment in this series on Dates and Times in MySQL, we'll be putting everything we've learned thus far into practice by writing SELECT queries to obtain date-related insights into our data.</p><h1 class="blog-sub-title">Selecting a Date from a Datetime Column</h1><p>One of the first challenges database practitioners encounter when trying to query with dates is that a good deal of temporal data is stored as DateTime and Timestamp data types. For example, the Sakila Sample Database stores the customer table's create_date column as a Datetime: </p><img alt="datetime_column (51K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/datetime_column.jpg" height="259" width="593" /><p>Hence, if we try to select customer records that were created on a specific date, we can't simply supply a date value:</p><img alt="compare_date_to_datetime (29K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/compare_date_to_datetime.jpg" height="236" width="382" /><p>One simple workaround is to convert the Datetime values to Dates by using the DATE() function:</p><img alt="select_date_from_datetime (129K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/select_date_from_datetime.jpg" height="455" width="579" /><p>Now any record whose date matches ours will be returned.</p><h1 class="blog-sub-title">Obtaining the Difference Between Two Dates</h1><p>It is extremely common to perform queries that determine how long ago something happened. In MySQL, the way to do that is to employ the DATEDIFF() function.  It accepts two date values and returns the number of days between them.  
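</p><p>As a plain-SQL sketch (dates chosen arbitrarily):</p><pre>SELECT DATEDIFF('2022-04-11', '2022-04-01');
-- 10 days between the two dates</pre><p>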
Here's a simple example using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 16</a>:</p><img alt="datediff (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/datediff.jpg" height="239" width="381" /><p>Notice that, in the above example, DATEDIFF() is telling us that the first date is 10 days later than the second one. We can also use an earlier date for the first argument, in which case it will return a negative value:</p><img alt="datediff_past (26K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/datediff_past.jpg" height="237" width="384" /><h3>Calculating Periods Other than Days</h3><p>For periods other than days, we need to do a little conversion. For example, we can divide by 7 to obtain the number of weeks between two dates. Rounding is also employed to show whole weeks in the results:</p><pre>ROUND(DATEDIFF(end_date, start_date)/7, 0) AS weeksout</pre><p>For other time periods, the TIMESTAMPDIFF() function may be of help. It accepts two TIMESTAMP or DATETIME values (DATE values will auto-convert in MySQL) as well as the unit of time we want to base the difference on. For instance, we can specify MONTH as the unit in the first parameter:</p><pre>SELECT TIMESTAMPDIFF(MONTH, '2012-05-05', '2012-06-04'); -- Outputs: 0
SELECT TIMESTAMPDIFF(MONTH, '2012-05-05', '2012-06-05'); -- Outputs: 1
SELECT TIMESTAMPDIFF(MONTH, '2012-05-05', '2012-06-15'); -- Outputs: 1
SELECT TIMESTAMPDIFF(MONTH, '2012-05-05', '2012-12-16'); -- Outputs: 7</pre><h3>A More Complex Example</h3><p>Once you've got the hang of the DATEDIFF() function, you can use it in more advanced ways. 
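</p><p>For instance, DATEDIFF() can feed an aggregate. Here's a rough sketch against the Sakila <i>rental</i> table (column names per Sakila; unreturned rentals are excluded):</p><pre>SELECT ROUND(AVG(DATEDIFF(return_date, rental_date)), 1) AS avg_days_out
FROM rental
WHERE return_date IS NOT NULL;</pre><p>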
Case in point, here's a query that uses the DATEDIFF() function to calculate the average number of days that customers keep their film rentals before returning them: </p><img alt="average rental length in days query (90K)" src="https://www.navicat.com/link/Blog/Image/2022/20220401/average%20rental%20length%20in%20days%20query.jpg" height="564" width="563" /><p>To do that, the results of the DATEDIFF() function are passed to the AVG() function and then rounded to 1 decimal place.</p><h1 class="blog-sub-title">Series Conclusion</h1><p>We've covered a lot of ground in this series on Dates and Times, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>MySQL's five temporal data types</li> <li>some important date/time-oriented functions</li> <li>how to create dates and times in MySQL</li><li>querying by date</li></ul><p>While there certainly is a lot more to working with temporal data in MySQL, hopefully this series gave you a good head start on your road to MySQL proficiency.</p></body></html>]]></description>
</item>
<item>
<title>Working with Dates and Times in MySQL - Part 4</title>
<link>https://www.navicat.com/company/aboutus/blog/1887-working-with-dates-and-times-in-mysql-part-4.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Dates and Times in MySQL - Part 4</title></head><body><b>Mar 22, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Date/Time Creation</h1><p>In this series on Dates and Times, we've explored MySQL's five temporal data types, as well as some of its many date/time-oriented functions. In this installment, we'll be covering a few ways to create dates and times in MySQL.</p><h1 class="blog-sub-title">Using the MAKEDATE() Function</h1>  <p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1886-working-with-dates-and-times-in-mysql-part-3.html" target="_blank">Part 3</a>, we took a brief look at the MAKEDATE() function. It takes a <i>year</i> and <i>dayofyear</i> and returns the resulting Date value. For instance, MAKEDATE(2021, 200) would return a Date of "2021-07-19". The downside to this function should be readily apparent: it takes some calculation to determine the <i>dayofyear</i> if you have a year, month, and day. In that case, you can make a DATE by combining MAKEDATE() with DATE_ADD(). MAKEDATE() with a day of <i>1</i> will give you a DATE for the first day of the given year, and then you can add the month and day to it with DATE_ADD(). 
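</p><p>As a rough sketch of that combination (values arbitrary):</p><pre>SELECT DATE_ADD(
         DATE_ADD(MAKEDATE(2021, 1), INTERVAL 6 MONTH),
         INTERVAL 18 DAY);
-- 2021-01-01 + 6 months + 18 days = 2021-07-19</pre><p>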
Here's an example that sets the year and month only:</p><img alt="makedate_and_date_add (35K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/makedate_and_date_add.jpg" height="263" width="455" /><p>This SELECT statement includes the day as well:</p><img alt="makedate_and_date_add_with_day (44K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/makedate_and_date_add_with_day.jpg" height="237" width="632" /><h1 class="blog-sub-title">The MAKETIME() Function</h1><p>If you're looking to create a TIME only, MAKETIME() returns a time value calculated from the hour, minute, and second arguments:</p><img alt="maketime (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/maketime.jpg" height="260" width="383" /><p>The second argument can have a fractional part for milliseconds:</p><img alt="maketime_with_fractions (25K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/maketime_with_fractions.jpg" height="237" width="382" /><h1 class="blog-sub-title">The STR_TO_DATE() Function</h1><p>Another option for creating a DATE, TIME, or DATETIME is to use the STR_TO_DATE() function. It takes a date string and a format string and returns:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>a DATE value if the string contains only date</li><li>a TIME value if the string contains only time </li><li>a DATETIME value if the format string contains both date and time parts</li></ul><p>Moreover, if the date, time, or datetime value extracted from str is invalid, STR_TO_DATE() returns NULL and produces a warning.</p><h3>Some Examples</h3><p>Here are a couple of Dates in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 16</a>:</p><img alt="str_to_date (47K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/str_to_date.jpg" height="266" width="541" /><p>Scanning starts at the beginning of str and fails if format is found not to match. 
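</p><p>For example, here is a quick illustrative call (the format specifiers are covered in the MySQL docs):</p><pre>SELECT STR_TO_DATE('21,5,2013', '%d,%m,%Y');
-- 2013-05-21</pre><p>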
Meanwhile, extra characters at the end of str are ignored:</p><img alt="str_to_date_times (57K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/str_to_date_times.jpg" height="264" width="634" /><p>Unspecified date or time parts have a value of 0, so incompletely specified values in the date/time string produce a result with some or all parts set to 0:</p><img alt="str_to_date_times_with_missing_parts (46K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/str_to_date_times_with_missing_parts.jpg" height="266" width="515" /><p>For the full list of specifiers that can be used in format, see the <a class="default-links" href="https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_str-to-date" target="_blank">DATE_FORMAT() function description</a> in the official MySQL docs.</p><h1 class="blog-sub-title">Combining the MAKEDATE(), MAKETIME(), and STR_TO_DATE() Functions</h1><p>If we had two separate DATE and TIME values, we could get a DATETIME value by concatenating the results of MAKEDATE() and MAKETIME() and then passing the combined string to STR_TO_DATE(). While that might sound like a lot of work, it's really quite simple in practice:</p><img alt="str_to_date_datetime (49K)" src="https://www.navicat.com/link/Blog/Image/2022/20220322/str_to_date_datetime.jpg" height="234" width="757" /><h1 class="blog-sub-title">Conclusion</h1><p>In this installment of the Working with Dates and Times in MySQL series, we covered a few ways to create dates and times in MySQL using some of its specialized date and time functions. In the next installment, we'll look at how to use temporal data in your SELECT queries.</p></body></html>]]></description>
</item>
<item>
<title>Working with Dates and Times in MySQL - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/1886-working-with-dates-and-times-in-mysql-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Dates and Times in MySQL - Part 3</title></head><body><b>Mar 14, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Important Functions</h1><p>In the first two installments of this series on Dates and Times, we covered MySQL's five temporal data types. Now it's time to turn our attention to some of MySQL's many date/time-oriented functions.</p><h1 class="blog-sub-title">Getting the Current Date and Time</h1><p>Back in May of 2021, we covered some of SQL Server's notable Date &amp; Time functions, starting with how to obtain the current date and time. It offers the GETDATE() function for that purpose. MySQL's equivalent function is simply called NOW(). In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 16</a>, we can invoke this function without connecting to a database, since we aren't selecting any table columns:</p><img alt="now (26K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/now.jpg" height="265" width="403" /><p>As mentioned in Part 2, the TIMESTAMP type is similar to DATETIME, but is generally used to track changes to records. To obtain the current date and time as a TIMESTAMP, we can use the current_timestamp() function. Here's its output:</p><img alt="get_timestamp (28K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/get_timestamp.jpg" height="264" width="386" /><h1 class="blog-sub-title">Getting the Current Date Without the Time</h1><p>If you only want to get the current date in MySQL, you can use either the curdate() or current_date() functions. The CURRENT_DATE keyword (without parentheses) also works. 
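</p><p>A quick side-by-side sketch:</p><pre>SELECT CURDATE(), CURRENT_DATE(), CURRENT_DATE;</pre><p>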
In any event, all three give the current date in YYYY-MM-DD format:</p><img alt="curdate_etc (33K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/curdate_etc.jpg" height="264" width="400" /><h1 class="blog-sub-title">Getting the Current Time Only</h1><p>Likewise, we can get the current time in MySQL using the curtime() or current_time() functions, as well as the CURRENT_TIME keyword. These all give the current time in HH:MM:SS format:</p><img alt="curtime_etc (32K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/curtime_etc.jpg" height="263" width="401" /><h1 class="blog-sub-title">Parsing Out Individual Date Parts</h1><p>SQL Server offers the versatile DATEPART() function to extract part of a datetime. MySQL provides the equivalent EXTRACT() function for this purpose. Similar to the SQL Server function, EXTRACT() accepts a <i>part</i> unit and the <i>date</i>:</p><pre>EXTRACT(part FROM date)</pre><p>Here are all of the valid part values:</p><ul><li>MICROSECOND</li><li>SECOND</li><li>MINUTE</li><li>HOUR</li><li>DAY</li><li>WEEK</li><li>MONTH</li><li>QUARTER</li><li>YEAR</li><li>SECOND_MICROSECOND</li><li>MINUTE_MICROSECOND</li><li>MINUTE_SECOND</li><li>HOUR_MICROSECOND</li><li>HOUR_SECOND</li><li>HOUR_MINUTE</li><li>DAY_MICROSECOND</li><li>DAY_SECOND</li><li>DAY_MINUTE</li><li>DAY_HOUR</li><li>YEAR_MONTH</li></ul><p>Since it is February at the time of this writing, the following call to EXTRACT() yields a value of "2":</p><img alt="extract_month (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/extract_month.jpg" height="263" width="339" /><p>As the following query shows, it is currently 43 minutes past the hour:</p><img alt="extract_minute (28K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/extract_minute.jpg" height="264" width="337" /><h3>Additional Date Parsing Functions</h3><p>Having trouble remembering all of the <i>part</i> units?  
That's OK, because MySQL provides separate functions for date and time parsing as well.</p><p>For parsing either the date or time from a datetime value, there are the DATE() and TIME() functions, respectively: </p><img alt="date_and_time (32K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/date_and_time.jpg" height="264" width="384" /><p>To split a date into its constituent parts, we can use the YEAR(), MONTH(), and DAYOFMONTH() (or DAY()) functions:</p><img alt="year_month_day (43K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/year_month_day.jpg" height="265" width="539" /><p>Time portions also get their own functions: HOUR(), MINUTE(), and SECOND(), respectively:</p><img alt="hour_minute_second (39K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/hour_minute_second.jpg" height="261" width="538" /><h1 class="blog-sub-title">Constructing a Datetime From Separate Parts</h1><p>In MySQL, there are many ways to create a datetime from separate date and time parts, enough to garner their own article. For now, let's look at one way to create a Date. The MAKEDATE() function returns a date given a <i>year</i> and <i>dayofyear</i>. Here's an example:</p><img alt="makedate (28K)" src="https://www.navicat.com/link/Blog/Image/2022/20220314/makedate.jpg" height="264" width="385" /><h1 class="blog-sub-title">Going Forward</h1><p>In this blog, we explored some of MySQL's many date/time-oriented functions. In the next installment, we'll cover some other ways to create dates and times in MySQL.</p></body></html>]]></description>
</item>
<item>
<title>Working with Dates and Times in MySQL - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1885-working-with-dates-and-times-in-mysql-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Dates and Times in MySQL - Part 2: TIMESTAMP and YEAR Types</title></head><body><b>Mar 4, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">TIMESTAMP and YEAR Types</h1><p>Welcome back to this series on working with dates and times in MySQL. In the first two installments, we're looking at MySQL's temporal data types. Part 1 covered the DATE, TIME, and DATETIME data types, while this installment will cover the remaining TIMESTAMP and YEAR types.</p><h1 class="blog-sub-title">The TIMESTAMP Type</h1><p>The TIMESTAMP type is similar to DATETIME in MySQL in that both are temporal data types that hold a combination of date and time. This raises the question: why have two types for the same information? For starters, timestamps in MySQL are generally used to track changes to records, and are often updated every time the record is changed, whereas datetimes are used to store specific temporal values. Another way to think about it is that DATETIME represents a date (as found in a calendar) and a time (as seen on a wall clock), while TIMESTAMP represents a well-defined point in time. This distinction could be very important if your application handles timezones, as how long ago '2009-11-01 14:35:00' was depends on what timezone you're in. Meanwhile, 1248761460 seconds since '1970-01-01 00:00:00 UTC' always refers to the same point in time. </p><p>In terms of storage, a TIMESTAMP requires 4 bytes while DATETIME requires 5. TIMESTAMP columns store 14 characters, but you can display them in different ways, depending on how you define the column. For example, if you define the column as TIMESTAMP(2), only the two-digit year will be displayed (even though the full value is stored).  
The advantage to this approach is that, if you later decide to display the full value, you can change the table definition, and the full value will appear.</p> <p>Below is a list of various ways to define a TIMESTAMP, and the resultant display format:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>TIMESTAMP(14): YYYY-MM-DD HH:MM:SS</li><li>TIMESTAMP(12): YY-MM-DD HH:MM:SS</li><li>TIMESTAMP(10): YY-MM-DD HH:MM</li><li>TIMESTAMP(8): YYYY-MM-DD</li><li>TIMESTAMP(6): YY-MM-DD</li><li>TIMESTAMP(4): YY-MM</li><li>TIMESTAMP(2): YY</li></ul><p>In the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16</a> Table Designer, a timestamp's precision may be defined in the Length column:</p><img alt="timestamp_in_table_designer (44K)" src="https://www.navicat.com/link/Blog/Image/2022/20220304/timestamp_in_table_designer.jpg" height="188" width="615" /><p>If no Length is supplied, as in the above example, Navicat displays the full field, as if it were declared as TIMESTAMP(14):</p><img alt="timestamp_display_format (44K)" src="https://www.navicat.com/link/Blog/Image/2022/20220304/timestamp_display_format.jpg" height="278" width="447" /><h1 class="blog-sub-title">The YEAR Type</h1><p>Many DBAs opt to store years as integers. While that can certainly work, it is more efficient to use MySQL's dedicated YEAR type for that purpose, as the YEAR type uses a mere 1 byte. It can be declared as YEAR(2) or YEAR(4) to specify a display width of two or four characters. If no width is given, the default is four characters. 
YEAR(4) and YEAR(2) have different display formats but have the same range of values:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>For 4-digit format, MySQL displays YEAR values in YYYY format, with a range of 1901 to 2155, or 0000.</li><li>For 2-digit format, MySQL displays only the last two (least significant) digits; for example, 70 (1970 or 2070) or 69 (2069).</li></ul><p>Here's an example of a year column in the Navicat Table Designer with a four digit format:</p><img alt="year_in_table_designer (77K)" src="https://www.navicat.com/link/Blog/Image/2022/20220304/year_in_table_designer.jpg" height="346" width="616" /><p>As a result, we see the full year in the table:</p><img alt="year_display_format (89K)" src="https://www.navicat.com/link/Blog/Image/2022/20220304/year_display_format.jpg" height="258" width="780" /><h1 class="blog-sub-title">Conclusion</h1><p>That concludes our exploration of the five MySQL temporal data types. The next installment will cover some useful date and time functions.</p></body></html>]]></description>
</item>
<item>
<title>Working with Dates and Times in MySQL - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/1884-working-with-dates-and-times-in-mysql-part-1.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working with Dates and Times in MySQL - Part 1</title></head><body><b>Feb 25, 2022</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">DATE, TIME, and DATETIME Types</h1><p>The vast majority of databases store a great deal of "temporal" data. Temporal data is simply data that represents a state in time. An organization may collect temporal data for a variety of reasons, such as to analyze weather patterns and other environmental variables, monitor traffic conditions, study demographic trends, etc. Businesses also routinely need to store temporal data about when orders were placed, stock refilled, staff hired, and a whole host of other information about their day-to-day business.</p>    <p>You may be surprised to learn that relational databases do not store dates and times in the same way. MySQL is especially prescriptive. For instance, it stores date values using the universal yyyy-mm-dd format. This format is fixed and may not be changed; you may prefer a mm-dd-yyyy format, but MySQL will not store dates that way. However, you can use the DATE_FORMAT function to format the date the way you want in the presentation layer, usually an application. In the first two installments on working with Dates and Times in MySQL, we'll be looking at MySQL's temporal data types, starting with DATE, TIME, and DATETIME.</p><h1 class="blog-sub-title">Types At a Glance</h1><p>MySQL provides five types for storing dates and times, some just for dates, others for time, and some that include both. 
Here's a table that summarizes each type:</p><table border="2" cellspacing="0" cellpadding="5"><thead><tr><th>Type Name</th><th>Description</th></tr></thead><tbody><tr><td>DATE</td><td>A date value in <code>YYYY-MM-DD</code> format</td></tr><tr><td>TIME</td><td>A time value in <code>hh:mm:ss</code> format</td></tr><tr><td>DATETIME</td><td>A date and time value in <code>YYYY-MM-DD hh:mm:ss</code> format</td></tr><tr><td>TIMESTAMP</td><td>A timestamp value in <code>YYYY-MM-DD hh:mm:ss</code> format</td></tr><tr><td>YEAR</td><td>A year value in <code>YYYY</code> or <code>YY</code> format</td></tr></tbody></table><p>The rest of this article will cover the first three types in more detail, while the next one will focus on the other two.</p><h1 class="blog-sub-title">The DATE Type</h1><p>MySQL uses 3 bytes to store a DATE value. The DATE values range from 1000-01-01 to 9999-12-31. Moreover, when strict mode is disabled, MySQL converts any invalid date (e.g., 2015-02-30) to the zero date value 0000-00-00. 
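</p><p>As a minimal sketch of declaring and populating a DATE column (table name hypothetical):</p><pre>-- Hypothetical table for illustration
CREATE TABLE events (
    event_date DATE
);
INSERT INTO events (event_date) VALUES ('2022-02-25');</pre><p>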
In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16</a>, you can select the DATE type in the Table Designer from the Types drop-down:</p><img alt="date_column_in_table_designer (159K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/date_column_in_table_designer.jpg" height="609" width="964" /><meta property="og:image" content="https://www.navicat.com/link/Blog/Image/2022/20220225/date_column_in_table_designer.jpg" /><p>To set a DATE value, you can simply choose it using the calendar control:</p><img alt="calendar (88K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/calendar.jpg" height="474" width="488" /><p>Of course, you can also insert a DATE using the INSERT statement:</p><img alt="insert_date (25K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/insert_date.jpg" height="102" width="519" /><h1 class="blog-sub-title">The TIME Type</h1><p>MySQL uses the 'HH:MM:SS' format for querying and displaying a time value that represents a time of day (within 24 hours). To represent a time interval between two events, which can exceed 24 hours, MySQL uses the larger 'HHH:MM:SS' format.</p><p>Here is the TIME type in the Navicat 16 Types drop-down:</p><img alt="time_column_in_table_designer (79K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/time_column_in_table_designer.jpg" height="376" width="704" /><p>To set a TIME value, Navicat provides the TIME INPUT control:</p><img alt="time_input_control (11K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/time_input_control.jpg" height="156" width="258" /><p>Here's an INSERT statement that sets a start and end time:</p><img alt="insert_time (24K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/insert_time.jpg" height="97" width="511" /><h1 class="blog-sub-title">The DATETIME Type</h1><p>Quite often, you'll need to store both a date and time. To do that, you can use the MySQL DATETIME type. 
By default, DATETIME values range from 1000-01-01 00:00:00 to 9999-12-31 23:59:59. When you query data from a DATETIME column, MySQL displays the DATETIME value in the same YYYY-MM-DD HH:MM:SS format.</p><p>A DATETIME value uses 5 bytes for storage. In addition, a DATETIME value can include a trailing fractional second up to microseconds with the format YYYY-MM-DD HH:MM:SS[.fraction], for example, 2015-12-20 10:01:00.999999.</p><p>For inputting DATETIME values, Navicat provides the DATETIME INPUT control, which combines the DATE and TIME controls:</p><img alt="datetime_input_control (63K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/datetime_input_control.jpg" height="331" width="372" /><p>DATETIME values may be set using a string literal that contains the "T" time portion delimiter or by casting to a DATETIME:</p><img alt="insert_datetime (31K)" src="https://www.navicat.com/link/Blog/Image/2022/20220225/insert_datetime.jpg" height="157" width="516" /><h1 class="blog-sub-title">Going Forward</h1><p>Having explored the DATE, TIME, and DATETIME Types, the next installment will cover the remaining two temporal types: TIMESTAMP and YEAR.</p></body></html>]]></description>
</item>
<item>
<title>Some Useful MySQL Numeric Functions</title>
<link>https://www.navicat.com/company/aboutus/blog/1881-some-useful-mysql-numeric-functions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Some Useful MySQL Numeric Functions</title><meta property="og:image" content="https://www.navicat.com/link/Blog/Image/2022/20220218/min_max.jpg" /></head><body><b>Feb 18, 2022</b> by Robert Gravelle<br/><br/><p>Back in May of 2021, we examined a few of SQL Server's <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1716-important-sql-server-functions-numeric-functions" target="_blank">Important SQL Server Functions</a>. Now, it's time to turn our attention to MySQL to see what it offers us in terms of math and numeric functions. To see how they work in practice, we'll use them in queries that we'll run in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat 16 for MySQL</a>.</p><h1 class="blog-sub-title">Comparing SQL Server and MySQL Functions</h1><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1716-important-sql-server-functions-numeric-functions" target="_blank">Important SQL Server Functions</a> article, we reviewed the Abs, Round, Ceiling, and Floor functions. As it turns out, not only does MySQL also implement these functions, but with exactly the same names. Perhaps this is not surprising, as these are fundamental numeric functions across database products and programming languages. </p><p>Since the functions are the same in both DBMSes, there's no point in rehashing their use. Instead, we'll forge onwards and explore other useful numeric functions in MySQL.</p><h1 class="blog-sub-title">AVG</h1><p>You probably already know what the Average, or Arithmetic Mean, is. You may even know that it is calculated by adding all of the values within a data set, and then dividing that result by the number of data points in the set.  
Hence, if we had five numbers such as 4, 5, 6, 5, 3, we would calculate their average as follows:</p><pre>(4 + 5 + 6 + 5 + 3) / 5 = 4.6</pre><p>Simple enough to do with a small sample, but what happens when you have 10,000 rows? The answer, of course, is to use MySQL's built-in AVG function (identically named in SQL Server, by the way). All we need to do is provide it with a numeric expression and it returns its average value. Here's its simple syntax:</p><pre>AVG(expression)</pre><p>Most of the time, you'll find yourself passing in a column name whose average you'd like to calculate. For example, here's a query that gives us the average running time of all the films in the Sakila Sample Database:</p><img alt="avg (84K)" src="https://www.navicat.com/link/Blog/Image/2022/20220218/avg.jpg" height="573" width="562" /><p>The GROUP BY breaks up values by the category_id so that averages are based on each Film Type, i.e., "Action", "Drama", etc.</p><p>By passing the results of the AVG function to ROUND, we can omit some of the extra decimal places.</p><h3>A More Complex Example</h3><p>The interesting thing about numeric functions is that they can be used as part of larger calculations. Case in point, here's a query that shows the film categories in which the average difference between the replacement cost and the rental rate is larger than 17 dollars:</p><img alt="avg_replacement_cost (104K)" src="https://www.navicat.com/link/Blog/Image/2022/20220218/avg_replacement_cost.jpg" height="515" width="564" /><p>To calculate that difference, the average rental rate is subtracted from the average replacement cost. No need for temporary variables; just subtract the results of one function from the other:</p><pre>( AVG( replacement_cost ) - AVG( rental_rate ) ) AS replace_sub_rental</pre><h1 class="blog-sub-title">MIN/MAX</h1><p>Have you noticed that a lot of numeric functions have three-letter names? 
Not sure why that is, but here are two related functions for calculating the minimum and maximum values of a set. Again, the most typical usage is to pass a column name to the function. The following query selects film details for the first and last rentals, according to the rental_date column. As such, it is passed to both the MIN and MAX functions:</p><img alt="min_max (73K)" src="https://www.navicat.com/link/Blog/Image/2022/20220218/min_max.jpg" height="349" width="621" /><p>Mixing aggregate functions and scalar data can be problematic, so the MIN and MAX rental_dates are fetched within a subquery for comparison to those of each Film table row. </p><h1 class="blog-sub-title">Conclusion</h1><p>This blog presented a few useful numeric functions in MySQL, including AVG, MIN, and MAX using Navicat 16 for MySQL as our database client. Speaking of which, if you'd like to give Navicat 16 for MySQL a test drive, you can download a 14 day trial <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Writing Exclusive OR Conditions in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1880-writing-exclusive-or-conditions-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Writing Exclusive OR Conditions in Relational Databases</title></head><body><b>Feb 11, 2022</b> by Robert Gravelle<br/><br/><p>One of the key ingredients to writing effective SQL queries is the ability to articulate a wide variety of conditions using SQL syntax. One condition that gives both newbies and experienced database developers pause for thought is the Exclusive OR.  Software programmers tend to be more familiar with the syntax for the Exclusive OR condition, probably because most programming languages support the XOR logical operator, whereas many databases do not. </p><p>In simple terms, the Exclusive OR condition is similar to the regular OR, except that, in the case of the Exclusive OR, only one of the compared operands may be true, and not both. In this blog, we'll learn how to express an Exclusive OR condition for a variety of databases, whether they support the XOR operator, or not.</p><h1 class="blog-sub-title">Using the XOR Operator</h1><p>Some popular relational databases, such as MySQL, support the XOR operator, making writing Exclusive OR conditions fairly trivial.  To illustrate, let's consider a scenario where we need to find customers that reside within a specific city, or whose account was created after a specific date, but not both. 
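</p><p>In databases that support it, the general shape of such a condition is simply:</p><pre>WHERE condition_a XOR condition_b</pre><p>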
More specifically, say that we want to see customers who reside in Lethbridge, Alberta, OR, if they do not reside in Lethbridge, whose account was created after Jan 1st, 2020.</p><p>Here's just such a query, executed against the Sakila Sample Database using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>:</p><img alt="xor_query_1 (191K)" src="https://www.navicat.com/link/Blog/Image/2022/20220211/xor_query_1.jpg" height="770" width="673" /><p>Looking at the results, we can see that the first customer, whose account was created on 2020-07-07, has a store_id of 2, while the rest of the customers all have a store_id of 1 (the Lethbridge store). </p><p>Meanwhile, if we replace the XOR with a regular OR, we now see customers who shop at store #1 whose accounts were also created after 2020-01-01:</p><img alt="or_query_1 (128K)" src="https://www.navicat.com/link/Blog/Image/2022/20220211/or_query_1.jpg" height="386" width="675" /><p>Allowing both operands to evaluate to TRUE is what differentiates OR from XOR.</p><h1 class="blog-sub-title">Writing an Exclusive OR Condition Where XOR Is Not Supported</h1><p>Luckily, it's not that hard to formulate an Exclusive OR condition without the XOR operator; you just need to think about it a bit more. Mathematically speaking, x XOR y is equal to: </p><pre>(x AND (NOT y)) OR ((NOT x) AND y)</pre><p>We can simplify the above formula to the following for the purposes of SQL writing:</p><pre>(A OR B) AND NOT (A AND B)</pre><p>We'll try out this formula by rewriting the first query for SQL Server. 
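</p><p>For reference, the MySQL query behind the first screenshot presumably looks something like the following sketch. The join path through Sakila's customer, address, and city tables is an assumption, since the screenshot doesn't show the full statement:</p><pre>SELECT c.customer_id, c.store_id, ci.city, c.create_date
FROM customer c
JOIN address a ON c.address_id = a.address_id
JOIN city ci ON a.city_id = ci.city_id
WHERE (ci.city = 'Lethbridge') XOR (c.create_date > '2020-01-01');</pre><p>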
If we try to execute it against that database, we get the following error, which states that SQL Server did not recognize the XOR operator:</p><img alt="error_msg (27K)" src="https://www.navicat.com/link/Blog/Image/2022/20220211/error_msg.jpg" height="141" width="671" /><p>Using the above formula, we can rewrite the XOR condition as:</p><pre>WHERE   (ci.city = 'Lethbridge' OR c.create_date > '2020-01-01')
AND NOT (ci.city = 'Lethbridge' AND c.create_date > '2020-01-01')</pre><p>Here are the results in SQL Server (note that the data in both databases is not identical):</p><img alt="sql_server (224K)" src="https://www.navicat.com/link/Blog/Image/2022/20220211/sql_server.jpg" height="853" width="671" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to articulate an Exclusive OR condition in a variety of databases, both with the XOR operator and without it.</p><p>If you'd like to give Navicat 16 a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Creating a Test Database with Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/1878-creating-a-test-database-with-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Creating a Test Database with Navicat 16</title></head><body><b>Feb 7, 2022</b> by Robert Gravelle<br/><br/><p>Recently, we learned how to <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1821-generating-test-data-in-navicat-16.html" target="_blank">generate test data</a> using Navicat 16's new Data Generation tool.  It can help produce a large volume of complex testing data over multiple related tables, all guided by a multi-step wizard. In today's follow-up, we'll go through the process of creating a MySQL test database - all using Navicat 16.</p><h1 class="blog-sub-title">Duplicating a Production Database</h1><p>For optimal testing, the best approach is to duplicate the structure of your production databases (DBs), while replacing the "real" data with sanitized test values. For the purposes of this tutorial, we'll use an instance of the <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx" target="_blank">MySQL classicmodels Sample Database</a> as the source for our test database.  Here it is in Navicat Premium 16: </p><img alt="classicmodels_db (94K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/classicmodels_db.jpg" height="404" width="749" /><p>There is usually more than one way to accomplish a task in Navicat, and duplicating a database is no exception. Here are a couple of approaches:</p><h3>Create a New Database</h3><p>Rather than copy the database, we can create a brand new one and then generate the test data for it. 
To do that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>In the Navigation pane, right-click your connection and select New Database:<p><img alt="new_database (51K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/new_database.jpg" height="362" width="354" /></p></li><li>Enter the database properties in the pop-up window:<p><img alt="new_database_dialog (26K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/new_database_dialog.jpg" height="392" width="442" /></p><i>Hint: if you aren't sure what Character Set and Collation to use, you can open the Edit Database dialog on the source DB to see their values.</i></li><li>To copy over the table structures without data, simply select all of the tables in the Objects pane and drag them over to the new DB.  A popup menu will appear asking you whether to copy over the Structure and Data or the Structure only:<p><img alt="copy_database_structure (88K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/copy_database_structure.jpg" height="648" width="376" /></p>Choose the latter option.</li></ul><h3>Generate Tables From Model</h3><p>Most organizations maintain model diagrams of their databases. Navicat's Modeling tool can generate database objects from a model (forward engineering) as well as generate a model from an existing DB (reverse engineering). 
Let's use it now to generate our test tables.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Follow the first two steps from the last exercise to create the classicmodels_test database.</li><li>Click the Model button on the main toolbar to see available models:<p><img alt="models (34K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/models.jpg" height="215" width="675" /></p><i>Hint: if you don't have a model for your database, you can generate one by right-clicking the database in the Navigation Pane and choosing <i>Reverse Schema to Model</i> from the popup menu.</i></li><li>Open the model in the Modeling tool by selecting the model in the Objects pane and clicking the Design Model button in the Objects pane toolbar:<p><img alt="design_model_button (33K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/design_model_button.jpg" height="190" width="623" /></p></li><li>In the Modeling tool, select File -> Synchronize to Database... from the main menu.</li><li>In the Synchronize to Database dialog, designate classicmodels_test as the target database and click the Compare button:<p><img alt="sync_to_database_dialog (66K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/sync_to_database_dialog.jpg" height="681" width="665" /></p></li><li>Navicat will then determine which objects to create, update, or drop to synchronize both databases. 
In our case, it will generate all of the necessary tables:<p><img alt="sync_to_database_dialog_compare (86K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/sync_to_database_dialog_compare.jpg" height="870" width="734" /></p></li><li>On the next screen, we can review the SQL statements that will be executed:<p><img alt="sync_to_database_dialog_preview (304K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/sync_to_database_dialog_preview.jpg" height="870" width="734" /></p>Click the Start button to run the script.</li><li>We'll get a detailed progress report as the script runs:<p><img alt="sync_to_database_dialog_message_log (180K)" src="https://www.navicat.com/link/Blog/Image/2022/20220207/sync_to_database_dialog_message_log.jpg" height="870" width="734" /></p></li></ul><p>From there, we only need to follow the steps outlined in the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1821-generating-test-data-in-navicat-16.html" target="_blank">Generating Test Data in Navicat 16</a> blog.</p><h1 class="blog-sub-title">Conclusion</h1><p>Navicat 16 provides numerous options for duplicating an existing database, either on the same server or in a completely different environment.</p><p>Speaking of Navicat 16, if you'd like to give it a test drive, you can download a 14-day trial <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>A Virtual Tour of the New Standalone Navicat Charts Creator</title>
<link>https://www.navicat.com/company/aboutus/blog/1876-a-virtual-tour-of-the-new-standalone-navicat-charts-creator.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Virtual Tour of the New Standalone Navicat Charts Creator</title></head><body><b>Jan 21, 2022</b> by Robert Gravelle<br/><br/><p>As mentioned in the recent <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1829-present-your-data-more-effectively-with-navicat-16" target="_blank">Present Your Data More Effectively with Navicat 16</a> blog, Navicat 16 added some new charting features, such as support for more data sources and chart types as well as an increased focus on usability and accessibility. These improvements coincide with the release of the new standalone <a class="default-links" href="https://navicat.com/products/navicat-charts-creator" target="_blank">Navicat Charts Creator</a>. This blog will provide a tour of the Navicat Charts Creator and demonstrate how it can help you to gain deeper insights from your data. </p><h1 class="blog-sub-title">About Workspaces</h1><p>As the name suggests, a workspace is the place that gathers various resources together to work with as a cohesive unit. In the case of the Navicat Charts Creator, the workspace contains dashboards, charts and data sources. You can create multiple dashboards, charts and data sources within a single workspace.  If you are new to the Navicat Charts Creator, you can open the sample workspace and explore its contents:</p><img alt="workspace (80K)" src="https://www.navicat.com/link/Blog/Image/2022/20220121/workspace.jpg" height="492" width="640" /><h3>Creating a Workspace</h3><p>In the main window, click New Workspace.  You're now ready to create data sources, charts and dashboards:</p><img alt="new_workspace (73K)" src="https://www.navicat.com/link/Blog/Image/2022/20220121/new_workspace.jpg" height="681" width="691" /><p>When it's time to save your workspace, you have the option of saving it locally or in the Cloud.  
To save locally, you would choose <i>File -> Save</i> from the main menu, while to save to the Cloud, you would select <i>File -> Save to Cloud</i>. From there, you only need to enter the workspace name and choose the project.</p><h1 class="blog-sub-title">Data Sources</h1><p>When building a chart, you will need to specify a data source that will supply the chart data via a dataset. The fields in the dataset can be used to construct a chart.  Data sources can reference tables in your connections or data in files/ODBC sources, as well as select data from tables on different server types. </p><img alt="data_source (145K)" src="https://www.navicat.com/link/Blog/Image/2022/20220121/data_source.jpg" height="726" width="962" /><h3>Creating a Data Source</h3><p>To create a data source:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>In the Workspace window, click the New Data Source button, located directly under the main toolbar.</li><li>Enter the name of the data source and select the desired connections, files or existing data sources.  On the New Data Source dialog, you can choose from three options: Database, File/ODBC, or Connections in Existing Data Sources:<p><img alt="new_data_source (57K)" src="https://www.navicat.com/link/Blog/Image/2022/20220121/new_data_source.jpg" height="712" width="802" /></p></li></ul><p>Upon clicking the OK button, a tab will open where you can edit the data source.  Moreover, if you want to add more connections, you can click the plus (+) symbol at the top of the Connections pane and repeat the above steps.</p><p>To add tables to the Design pane, simply drag and drop them from the Connections pane! Likewise, nodes may also be connected to one another to create joins via drag and drop. Join types may be configured if necessary by clicking on them. 
To view the table data at any time, click the Preview button.</p><h1 class="blog-sub-title">Charts</h1><p>A chart is what provides a visual representation of the data in your data source. Mapping to a single data source, a chart can display correlations between several fields in the data. Navicat Charts Creator supports no fewer than 17 chart types! You can even make the chart interactive by adding a control chart. Here's one that allows the user to select order date months:</p><img alt="control_chart (33K)" src="https://www.navicat.com/link/Blog/Image/2022/20220121/control_chart.jpg" height="500" width="571" /><h1 class="blog-sub-title">Conclusion</h1><p>I hope that you enjoyed the tour of the Navicat Charts Creator. This blog really only scratched the surface of what it can do, but we'll be looking at the Navicat Charts Creator's capabilities in more detail over the next several weeks. In the meantime, you can download the standalone <a class="default-links" href="https://navicat.com/en/download/navicat-charts-creator" target="_blank">Charts Creator</a> and try it for free for 14 days!</p></body></html>]]></description>
</item>
<item>
<title>Calculating Percentage of Total Rows In SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1871-calculating-percentage-of-total-rows-in-sql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Calculating Percentage of Total Rows In SQL</title></head><body><b>Jan 14, 2022</b> by Robert Gravelle<br/><br/><p>There are many times when you'll want to see the relative contribution of a row (or group of rows) to the total row count. In other words, what percentage of the total count a row represents. To illustrate, let's take the following table, shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a>:</p><img alt="fruits_table (68K)" src="https://www.navicat.com/link/Blog/Image/2022/20220114/fruits_table.jpg" height="402" width="498" /><p>We can easily find out how many orders were received for each type of fruit by combining the count() function with the Group By clause:</p><img alt="fruit_orders_count (38K)" src="https://www.navicat.com/link/Blog/Image/2022/20220114/fruit_orders_count.jpg" height="360" width="334" /><p>So now, how would we view what percentage each fruit's orders contributed to the total number of orders? In fact, there are three standard ways to calculate row percentages in SQL.  They are:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Using the OVER() clause</li><li>Using a subquery</li><li>Using a Common Table Expression, or CTE</li></ul><p>The rest of this blog will explore each of these in turn.</p><h1 class="blog-sub-title">The OVER() Clause</h1><p>Used predominantly with Window Functions, the OVER clause is used to determine which rows from the query are applied to the function, what order they are evaluated in by that function, and when the function's calculations should restart. </p><p>The OVER clause is the most efficient way to calculate row percentages in SQL, so it should be your first choice if efficiency is a priority for you. 
Here's the formula to obtain a percentage: </p><pre>count(*) * 100.0 / sum(count(*)) over()</pre><p>Adding the above SQL to our original query produces the following results:</p><img alt="percentage_using_over (59K)" src="https://www.navicat.com/link/Blog/Image/2022/20220114/percentage_using_over.jpg" height="340" width="557" /><p>Looks good, but some rounding wouldn't hurt.  Unfortunately, that's not easily done using the over() clause. Perhaps the next option will be more to your liking.</p><h1 class="blog-sub-title">Using a Subquery</h1><p>Not all databases support the OVER() clause, so the subquery approach can be a very valuable fallback solution. It's sometimes referred to as the "universal solution" since it works in all databases. Another benefit of this approach is that it is the easiest to incorporate with functions such as Round(). Here is what we'll need to add to our query:</p><pre>count(*) * 100.0 / (select count(*) from &lt;YourTable&gt;)</pre><p>And here is the universal solution in action:</p><img alt="Universal_percentage (61K)" src="https://www.navicat.com/link/Blog/Image/2022/20220114/Universal_percentage.jpg" height="406" width="579" /><h1 class="blog-sub-title">Using a Common Table Expression (CTE)</h1><p>The <i>With common_table_expression</i> clause specifies a temporary named result set, known as a common table expression (CTE). We can then select from the temporary result set to apply more functions to retrieved fields. In our case, we can apply the sum() function to the counts to obtain the percentages:</p><img alt="percentage_using_cte (70K)" src="https://www.navicat.com/link/Blog/Image/2022/20220114/percentage_using_cte.jpg" height="441" width="602" /><p>Keep in mind that this approach is the least efficient as the CTE basically runs a second query against the results of the inner (initial) one. 
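</p><p>As a concrete illustration, the CTE version of the fruits query might look something like the following sketch. The table name <i>fruit_orders</i> and the <i>fruit</i> column are assumptions, since the screenshots don't show the full statement:</p><pre>WITH fruit_counts AS (
    SELECT fruit, COUNT(*) AS order_count
    FROM fruit_orders
    GROUP BY fruit
)
SELECT fruit,
       order_count,
       ROUND(order_count * 100.0 / (SELECT SUM(order_count) FROM fruit_counts), 2) AS percentage
FROM fruit_counts;</pre><p>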
That being said, there may be times when you'll need to use a CTE to perform additional processing that you couldn't easily do in one go.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we learned three ways to express the relative contribution of a row (or group of rows) to the total row count as a percentage. Each approach has its own strengths and weaknesses, so you'll have to choose one based on your specific requirements.</p><p>If you'd like to give Navicat 16 a try, you can download a 14-day fully functional FREE trial of Navicat <a class="default-links" href="https://navicat.com/en/download/navicat-premium" target="_blank">here</a>. </p></body></html>]]></description>
</item>
<item>
<title>Navicat 16 Improvements that Maximize Productivity</title>
<link>https://www.navicat.com/company/aboutus/blog/1862-navicat-16-improvements-that-maximize-productivity.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16 Improvements that Maximize Productivity</title></head><body><b>Jan 7, 2022</b> by Robert Gravelle<br/><br/><p>Throughout the past several weeks, we've been looking at Navicat 16's new features. While those are exciting to see, one should not discount the many improvements to Navicat that enhance its already great User Interface (UI) and workflow. Hence, today's blog will focus on improvements whose aim is to maximize performance and productivity.</p><h1 class="blog-sub-title">Connection Profile</h1><p>In keeping with the new work-from-home paradigm, Navicat 16 lets you configure multiple profiles for out-of-office users who may need to switch to a more secure connection depending on the location of the device they are using. To create a Connection Profile:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>In the connection window, click <img alt="icon_connectionProfile" src="https://www.navicat.com/link/Blog/Image/2022/20220107/icon_connectionProfile.png" height="16" width="16" />.<p><img alt="new_connection (25K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/new_connection.jpg" height="385" width="372" /></p></li><li>Click + New Connection Profile -> New Profile.</li><li>Enter the name of the profile.</li><li>Enter the connection settings.</li><li>Click OK.</li></ul><p>Then, to switch profiles:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>In the main window, right-click a connection and select Switch Connection Profile.</li><li>Select the profile name.</li></ul><p>You can also set the default active profile in the connection window.</p><h1 class="blog-sub-title">Value Picker</h1><p>In the Table Viewer, you can filter records according to numerous criteria using the Filter tool. 
In Navicat 16, it has been fully rewritten to embed an intuitive panel for selecting values from a list, or inputting possible values to further limit the data in the exact way you want:</p><img alt="value_picker (69K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/value_picker.jpg" height="426" width="475" /><p>It even includes a Search field for looking up specific values:</p><img alt="value_picker_search (15K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/value_picker_search.jpg" height="327" width="302" /><h1 class="blog-sub-title">Field Information</h1><p>Also in the Table Viewer, the right-hand panel now includes field information that gives you a quick view of column characteristics. It can help you to get column information and compare columns with ease.</p><p>Details include:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Data type</li><li>Not null</li><li>Default value</li><li>Comments</li></ul><p>Here's an example using the <i>orders</i> table of the <i>classicmodels</i> sample MySQL database:</p><img alt="field_info (120K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/field_info.jpg" height="334" width="844" /><h1 class="blog-sub-title">Query Summary</h1><p>Located at the bottom of the SQL Editor, the new query <i>Summary</i> tab shows a detailed summary of each SQL statement. It's an easy-to-read, one-page summary of the health and performance of your queries that includes a link to jump to any potential errors:</p><img alt="query_summary (72K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/query_summary.jpg" height="368" width="500" /><p>You can also view the full query by clicking on the ellipsis [...] 
button:</p><img alt="full_query (32K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/full_query.jpg" height="365" width="408" /><p>The Query Summary tab is especially useful in cases where you have many statements in the editor - especially Data Manipulation Language (DML) statements such as UPDATE, INSERT INTO, and DELETE:</p><img alt="Screenshot_Navicat_16_Query_Summary_win (79K)" src="https://www.navicat.com/link/Blog/Image/2022/20220107/Screenshot_Navicat_16_Query_Summary_win.png" height="551" width="672" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we explored four improvements in Navicat 16 whose aim is to maximize performance and productivity. These updates are all part of Navicat's commitment to maintaining your databases and ensuring that Navicat products perform at the highest possible level, so that your database administration and development activities are both efficient and fulfilling.</p><p>If you'd like to give Navicat 16 a try, you can download a 14-day fully functional FREE trial of Navicat <a class="default-links" href="https://navicat.com/en/download/navicat-premium" target="_blank">here</a>. </p></body></html>]]></description>
</item>
<item>
<title>Present Your Data More Effectively with Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/1829-present-your-data-more-effectively-with-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Present Your Data More Effectively with Navicat 16</title></head><body><b>Dec 29, 2021</b> by Robert Gravelle<br/><br/><p>With all of the excitement surrounding the release of Navicat 16, other noteworthy developments have been overshadowed somewhat. Perhaps none more so than the new standalone <a class="default-links" href="https://navicat.com/products/navicat-charts-creator" target="_blank">Navicat Charts Creator</a>. Charting has been a part of Navicat products for some time now. Navicat 15 went even further to include data visualization in order to help identify trends, patterns and outliers. Navicat 16 adds even more features by supporting more data sources and chart types, as well as an increased focus on usability and accessibility.  Suffice it to say, Navicat can deliver information and present your findings in dashboards for sharing with a wider audience than ever before. In today's blog, we'll take a quick tour of Navicat 16's new charting tools.</p><h1 class="blog-sub-title">Data Connectors</h1><p>In order to create visualizations of your data, you first need to connect to a data source. Navicat Charts Creator allows you to quickly and securely connect to any data source of your choice via four built-in data connectors. 
These include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li><p><img style="vertical-align:top;" alt="Relational DB (3K)" src="https://www.navicat.com/link/Blog/Image/2021/20211229/Relational%20DB.jpg" height="94" width="95" /><strong>Relational DB</strong>: For working with relational databases such as MySQL, MariaDB, PostgreSQL, Oracle, SQLite and SQL Server.</p></li><li><p><img style="vertical-align:top" alt="File Types (2K)" src="https://www.navicat.com/link/Blog/Image/2021/20211229/File%20Types.jpg" height="96" width="80" /><strong>File Types</strong>: Allows you to import data from external files such as Excel, Access, or CSV, whether stored on your computer, on a network, or accessible via a URL.</p></li><li><p><img style="vertical-align:top" alt="odbc (3K)" src="https://www.navicat.com/link/Blog/Image/2021/20211229/odbc.jpg" height="92" width="87" /><strong>ODBC</strong>: For importing data from any ODBC data source including Sybase, Snowflake and DB2.</p></li><li><p><img style="vertical-align:top" alt="Linked File (2K)" src="https://www.navicat.com/link/Blog/Image/2021/20211229/Linked%20File.jpg" height="94" width="79" /><strong>Linked File</strong>: Lets you link your chart to data in data sources to update the chart according to changes in the underlying data.</p></li></ul><h1 class="blog-sub-title">Chart Types</h1><p>It's vitally important to choose the right type of chart so that your presentations convey the message you want to communicate. To that end, Navicat 16 includes a wide array of chart types, ranging from standard to exotic: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Bar Chart</li><li>Line / Area Chart</li><li>Bar / Line Chart</li><li>Pie Chart</li><li>Heatmap / Treemap</li><li>Pivot Table</li><li>Scatter Chart</li><li>Value</li><li>Control</li><li>KPI</li></ul><p> Navicat 16 also introduces the Waterfall, Tornado, and Gauge chart types.  
</p><figure>  <figcaption>A Tornado Chart</figcaption>  <img alt="tornado (30K)" src="https://www.navicat.com/link/Blog/Image/2021/20211229/tornado.jpg" height="346" width="599" /></figure><h1 class="blog-sub-title">Dashboards</h1><p>Dashboards combine a collection of widgets to give you an overview of the reports and metrics you care about most. As such, they provide a way to monitor many metrics at once, so you can quickly see correlations between different reports. In Navicat 16, a dashboard shows various topics that you would like to track in one place by displaying a collection of charts in an interactive way. These may be synchronized in order to better demonstrate how the charts are related.  For example, hovering over one chart shows the effect in the other charts:</p><img alt="Related charts" src="https://navicat.com/images/Dashboard_01_Group_Charts.gif" /><p>Each dashboard receives a thumbnail that gives you a visual hint of the types of charts you have, so you can more easily navigate between dashboards:</p><img alt="Dashboard thumbnails" src="https://navicat.com/images/Dashboard_03_PageStyle.png" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog provided a quick tour of Navicat 16's new charting tools. These are available in all Navicat database development tools as well as in the brand new standalone <a class="default-links" href="https://navicat.com/en/download/navicat-charts-creator" target="_blank">Charts Creator</a>.</p><p>If you'd like to give Navicat 16 a try, you can download a 14-day fully functional FREE trial of Navicat <a class="default-links" href="https://navicat.com/en/download/navicat-premium" target="_blank">here</a>. </p></body></html>]]></description>
</item>
<item>
<title>Improved Collaboration in Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/1828-improved-collaboration-in-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Improved Collaboration in Navicat 16</title></head><body><b>Dec 23, 2021</b> by Robert Gravelle<br/><br/><p>When the Navicat team added the <a class="default-links" href="https://www.navicat.com/en/products/navicat-cloud">Navicat Cloud</a> collaboration tool a few years ago, little did anyone know that a global pandemic would make collaboration a vital part of most organizations - especially those who provide any kind of Information technology (IT) related services. Being where we are in the last days of 2021, it should come as no surprise that Navicat has expanded its cloud solutions for <a class="default-links" href="https://www.navicat.com/en/navicat-16-highlights">Navicat 16</a>. Now, Navicat Cloud supports more objects, and Navicat has just introduced an On-Prem Server for businesses working with sensitive data. Today's blog will provide an overview of Navicat 16's improved collaboration features.</p><h1 class="blog-sub-title">Navicat Cloud: More Objects Added</h1><p>In case you weren't already familiar with Navicat Cloud, it provides a cloud service for synchronizing Navicat connections, queries, models and virtual groups from different machines and platforms. Whenever you add a connection to Navicat Cloud, it stores connection settings and queries. You can synchronize model files to Navicat Cloud and create virtual groups as well. All the Navicat Cloud objects are located under different projects for easy access. 
Besides helping you keep your objects synchronized across devices, Navicat Cloud facilitates project sharing with other Navicat Cloud accounts for collaboration.</p><figure>  <figcaption>Navicat Cloud in Navicat Premium on macOS</figcaption>  <img alt="navicat_cloud (81K)" src="https://www.navicat.com/link/Blog/Image/2021/20211223/navicat_cloud.jpg" height="701" width="380" /></figure><p>In addition to connections, queries, models and virtual groups, Navicat 16 adds Code Snippets and Charts workspaces to Navicat Cloud and the new On-Prem Server. Now you'll be able to save your Code Snippet and Charts workspace files to the cloud and share them across your Navicat products and with team members.</p><h1 class="blog-sub-title">Introducing the Navicat On-Prem Server</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-on-prem-server">Navicat On-Prem Server</a> is an on-premise solution that provides you with the option to host a cloud environment for storing Navicat objects internally at <i>your</i> location. In the Navicat On-Prem environment, you can enjoy complete control over your system and maintain 100% privacy. It is intended for organizations who wish or need to maintain a level of control that the cloud often cannot provide.</p><p>In terms of functionality, Navicat On-Prem Server behaves very much like its Navicat Cloud counterpart. Hence, it can synchronize your connection settings, queries, models, snippets, chart workspaces and virtual group information across all your Windows, macOS and/or Linux devices. Files stored in Navicat On-Prem Server will automatically show up on Navicat Family and Navicat On-Prem Server Portal so that you can get real-time access from anywhere, anytime.</p><p>Navicat On-Prem Server is installed locally on your own servers and behind your firewall. Everything is run on the server with no third-party access. That way, you retain full control of security and data ownership within your environments. 
Moreover, all changes, configurations and upgrades may be performed at your sole discretion.</p><h1 class="blog-sub-title">Conclusion</h1><p> Today's blog provided an overview of Navicat 16's improved collaboration features. </p><img alt="navicat_products (16K)" src="https://www.navicat.com/link/Blog/Image/2021/20211223/navicat_products.jpg" height="135" width="582" /><p>Navicat Cloud and Navicat On-Prem Server are available on all Navicat products and all platforms including Windows, macOS and Linux.</p></body></html>]]></description>
</item>
<item>
<title>Generating Test Data in Navicat 16</title>
<link>https://www.navicat.com/company/aboutus/blog/1821-generating-test-data-in-navicat-16.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Generating Test Data in Navicat 16</title></head><body> <b>Dec 16, 2021</b> by Robert Gravelle<br/><br/>  <p>The recent Navicat 16 release listed some of its most noteworthy features and improvements, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Data Generation</li><li>Charts</li><li>On-Prem Server</li><li>Collaboration</li><li>UI/UX Improvements</li></ul><p>As promised, we'll be exploring these in much more detail throughout the coming weeks. In today's blog, we'll start with the entirely new Data Generation tool. We'll familiarize ourselves with it by going through the process of creating test data for multiple related tables in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 16</a> for Windows.</p><h1 class="blog-sub-title">Setting Up the Test Database</h1><p>The database that we'll be working with is the chinook sample database for SQLite. You can download it using the following link:</p><p><a class="default-links" href="https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip">Download SQLite sample database</a></p><p>Chinook represents a fictional digital media store, and hence includes tables for artists, albums, media tracks, invoices, and customers. Here they are in Navicat Premium 16:</p><img alt="sqlite_sample_db (36K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/sqlite_sample_db.jpg" height="379" width="314" /><h1 class="blog-sub-title">Launching the Wizard</h1><p>The Data Generation tool is located under the Tools item in the main toolbar:</p><img alt="data_generation_menu_command (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/data_generation_menu_command.jpg" height="270" width="401" /><p>The ellipsis (...) at the end of the label tells us that the command will open a dialog or wizard.  
In this case, the latter is true.</p><h1 class="blog-sub-title">Selecting a Database</h1><p>The first wizard screen lets us set the database for which to generate the test data. The wizard is smart enough to know that, since we already have an active database connection open, we probably want to generate data for it:</p><img alt="data_generation_wizard_screen1 (63K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/data_generation_wizard_screen1.jpg" height="601" width="729" /><p>At any stage, you can Save or Load a profile so that you don't have to start over when working with the same database(s). There is also an Options button that opens a dialog where you can configure a few general preferences:</p><img alt="data_generation_wizard_screen1_options (11K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/data_generation_wizard_screen1_options.jpg" height="224" width="286" /><h1 class="blog-sub-title">Table Population and Ordering</h1><p>The next screen is where we set which tables and fields to generate data for. (It goes without saying that you'll want to select empty tables that are based on the real tables that you're testing.) By default, Navicat generates 1000 rows of data, but we can change that value via the Number of Rows to Generate text field:</p><img alt="data_generation_wizard_screen2 (106K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/data_generation_wizard_screen2.jpg" height="841" width="987" /><p>Navicat will automatically determine which order to follow when generating data, but we can change it on the Table Generation Order dialog:</p><img alt="table_generation_order (15K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/table_generation_order.jpg" height="327" width="402" /><h1 class="blog-sub-title">Data Previews</h1><p>The next screen will show us a preview of what the generated data will look like for each table that we selected back on the second screen. 
This will give us the opportunity to manually change values or Regenerate all data for a table:</p><img alt="albums_test_data_preview (87K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/albums_test_data_preview.jpg" height="884" width="735" /><p>Once we're satisfied with the data, we can generate it by clicking the Start button.</p><h1 class="blog-sub-title">Progress Report</h1><p>Navicat provides a complete report of its progress. We can see here that a UNIQUE constraint failed on the artists.ArtistId field. That happened because that table already contained data!</p><img alt="finished_with_error (51K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/finished_with_error.jpg" height="409" width="555" /><p>Using the Back button, we can return to a previous screen to fix reported errors and try again. (This time I selected the test tables.)</p><img alt="finished_successfully (87K)" src="https://www.navicat.com/link/Blog/Image/2021/20211216/finished_successfully.jpg" height="597" width="601" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog we familiarized ourselves with Navicat 16's new Data Generation tool by going through the process of creating test data for the Chinook Sample Database for SQLite.  </p><p>Interested in trying Navicat 16 for yourself? You can download a 14-day free trial <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Storing Ternary Data In MySQL and PostgreSQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1820-storing-ternary-data-in-mysql-and-postgresql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Storing Ternary Data In MySQL and PostgreSQL</title></head><body><b>Dec 8, 2021</b> by Robert Gravelle<br/><br/><p>In software development, the Boolean data type handles binary states: it has only two possible values, true and false. However, there exists a third state that must often be accounted for, and that is one for "none of the above" or simply "other". In relational databases, NULL might seem to be a good candidate for this state, but it is not, due to its history. Recall from previous blogs that NULL has a very specific meaning in Structured Query Language (SQL): it indicates that a data value does not exist in the database. The NULL value was actually introduced by none other than the creator of the relational database model himself, E. F. Codd. In SQL, NULL has come to indicate "missing and/or inapplicable information". Seen in this light, NULL can hardly represent a "none of the above" or "other" condition. So then, what is the best way to represent ternary - or three-state - data in relational databases? We will answer that question here today for MySQL and PostgreSQL. Next week we'll cover SQL Server and Oracle.</p><h1 class="blog-sub-title">Introducing Enumerated Types</h1><p>Enumerated Types - also known as Enums - are data types that contain a static, ordered set of values. Enums are ideal for storing things such as the days of the week, user preferences, and any other collection of related data that seldom changes. Enums have enjoyed support in a number of programming languages for decades, and some of the biggest relational database players, including MySQL and PostgreSQL, have introduced an Enum type as well. 
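As an aside, the special behavior of NULL described above is easy to verify for yourself. Here is a minimal sketch using Python's standard sqlite3 module (SQLite stands in for any SQL engine here; MySQL and PostgreSQL behave the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Under SQL's three-valued logic, NULL = NULL evaluates to NULL (unknown),
# not TRUE -- one reason NULL is a poor fit for a "none of the above" state.
result = conn.execute("SELECT NULL = NULL").fetchone()[0]
print(result)  # None: the comparison is unknown, not true

# IS NULL is the only reliable test for the missing-value state.
result2 = conn.execute("SELECT NULL IS NULL").fetchone()[0]
print(result2)  # 1 (true)
```

Because `NULL = 'other'` is also unknown rather than false, any query filtering on a nullable "status" column silently drops those rows, which is why a real third Enum value is safer.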
Unfortunately, there are a few holdouts, including SQL Server and Oracle, which we'll talk about next week.</p><h1 class="blog-sub-title">Creating and Using Enums in MySQL</h1><p>To get an idea of how one would use Enums, let's start with the number one relational database in the world. Yes, I speak of MySQL. As you can see in the following CREATE TABLE statement, designating a column as an Enum type is quite trivial: </p><pre>CREATE TABLE shirts (
  name VARCHAR(40),
  size ENUM('x-small', 'small', 'medium', 'large', 'x-large')
);</pre><p>From there, you can refer to an Enum using one of its string values:</p><pre>INSERT INTO shirts (name, size)
VALUES ('dress shirt','large'),
       ('t-shirt','medium'),
       ('polo shirt','small');

SELECT name, size FROM shirts WHERE size = 'medium';

UPDATE shirts SET size = 'small' WHERE size = 'large';</pre><p>With regard to the tri-state issue, we can implement one as follows:</p><pre>CREATE TABLE employee (
  name VARCHAR(50),
  security_clearance ENUM('enhanced', 'secret', 'none')
);</pre><p>Now, trying to insert an invalid value into an Enum column will result in an error:</p><img alt="enum_error (33K)" src="https://www.navicat.com/link/Blog/Image/2021/20211208/enum_error.jpg" height="253" width="522" /><h1 class="blog-sub-title">Creating and Using Enums in PostgreSQL</h1><p>In PostgreSQL, Enum types are created using the CREATE TYPE command:</p><pre>CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');</pre><p>Once created, the Enum type can be used in a table much like any other type:</p><pre>CREATE TABLE person (
    name text,
    current_mood mood
);

INSERT INTO person VALUES ('Moe', 'happy');

SELECT * FROM person WHERE current_mood = 'happy';
 name | current_mood
------+--------------
 Moe  | happy
(1 row)</pre><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we saw how to represent tri-state data, and other discrete values, in MySQL and PostgreSQL, using Enumerated Types. But what about other database systems? Do they not support tri-state data? They do, but using different data types. We'll explore these next week.</p></body></html>]]></description>
</item>
<item>
<title>The Perils of Testing SQL in Production</title>
<link>https://www.navicat.com/company/aboutus/blog/1819-the-perils-of-testing-sql-in-production.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Perils of Testing SQL in Production</title></head><body><b>Dec 1, 2021</b> by Robert Gravelle<br/><br/><img alt="i-dont-always-test-my-code-but-when-i-do-its-already-in-production" src="https://www.navicat.com/link/Blog/Image/2021/20211130/i-dont-always-test-my-code-but-when-i-do-its-already-in-production.jpg" height="383" width="614" /><p>How many times have you found a query to be sufficiently performant when testing against sanitized data, only to see it stall once in production? It happens all the time, due to differences between the environments such as workload and volume of data. This may lead you to try out your query in production. After all, the fastest way to tune a query for production is on the production server, is it not? While technically true, many dangers await those foolish enough to tempt fate with such a cavalier disregard for safeguards and protocols. In this blog, we'll explore some of the risks associated with testing queries in production.</p><h1 class="blog-sub-title">Some Risks to Consider</h1><p>Those who test in production would be wise to remember that the point of a "test" is that you are testing something that runs the risk of "Bad Things" happening. You might not be able to think of any risks, but that's because you don't - and can't - know what those are...until you run your tests. Some of the Bad Things that can happen include:</p><h3>Logical Data Corruption</h3><p>INSERT and UPDATE statements are, by their very nature, likely to create records with spurious or malformed data, or records that violate referential integrity. Moreover, the bad data can break logical assumptions that previously tested and well-behaved applications hold about the data. 
Should an application encounter a faulty record, it may result in anything from "wrong answers" to crashing your entire site until the corruption is identified and manually fixed.</p><h3>Performance Degradation</h3><p>A common scenario is that your test app is doing a table scan that you didn't identify in your dev instance because it only contains about 10,000 records, compared to the 100 million in production. Once your application commences a table scan on one or more core tables, your prod database can become deadlocked until you kill the queries one-by-one. Worse still, once your app has used up all the available database connections, you won't even be able to open a command shell on the database to kill the queries.</p><h3>Locking Issues</h3><p>If you were to forget to COMMIT a transaction, you could wind up locking half the database because you have uncommitted multi-statement transactions sitting in your database connection pool. As a result, customer operations could time out and deadlock at random.</p><h3>End Users See Wrong Data</h3><p>Bad data is a problem at many levels. The <i>badness</i> can range from selling inventory that you don't have to having user accounts without passwords that open you up to hacks. A related concern is forgetting to delete test users. Since you're probably the only one who knows about them, they're likely to stick around until a hacker manages to discover your "test123" user with the "password123" password.</p><h3>Unknown and Undefined Risks</h3><p>There are countless other scenarios of disaster and destruction that are limited only by your imagination or those of higher powers who enjoy watching us suffer and struggle...</p><h1 class="blog-sub-title">Conclusion</h1><p>Never think that, because you "understand" your code, bad things can't happen. Down that path lies madness. You may get away with your shenanigans for a while, maybe even a good while. 
Nonetheless, the day will come when things go badly, and when they do, they will go very badly.</p></body></html>]]></description>
</item>
<item>
<title>Unicode and Non-Unicode String Data Types in SQL Server</title>
<link>https://www.navicat.com/company/aboutus/blog/1813-unicode-and-non-unicode-string-data-types-in-sql-serverv.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Unicode and Non-Unicode String Data Types in SQL Server</title></head><body><b>Nov 19, 2021</b> by Robert Gravelle<br/><br/><p>SQL Server provides a number of data types that support all types of data that you may want to store. As you may have guessed, a data type is an attribute that specifies the type of data that a column can store. It can be an integer, character string, monetary, date and time, and so on. The data types that cause some confusion among database designers and developers are those for storing character strings. A character string is a series of characters manipulated as a group. In the context of relational databases, character string data types are those which allow you to store either fixed-length (char) or variable-length (varchar) data. Moreover, SQL Server splits its string types into two broad categories: Unicode and non-Unicode. These equate to nchar, nvarchar, and ntext for Unicode types and char, varchar/varchar(max), and text for non-Unicode. In today's blog, we'll compare the two categories to decide when to use one over the other.</p><h1 class="blog-sub-title">Tracing the Roots of Unicode and Non-Unicode Data Types</h1><p>Nchar is short for "NATIONAL CHARACTER", nvarchar stands for "NATIONAL CHARACTER VARYING", and ntext is the ISO synonym for "NATIONAL TEXT". These types were originally intended for pre-Unicode multibyte encodings like <a class="default-links" href="https://en.wikipedia.org/wiki/JIS_encoding" target="_blank">JIS encoding</a> for Asian characters. The idea was that VARCHAR would continue to be utilized for ASCII, with NVARCHAR being employed for non-ASCII characters.</p> <p>This use-case was designed when the Internet was still in its infancy and before the Unicode project had taken off. 
In those days, Asian languages in particular employed their own specific - and mutually incompatible - encodings, with GB for mainland Chinese, JIS/SJIS for Japanese, BIG5 in Hong Kong and Taiwan, CNS in Taiwan, etc. However, all of that changed with the emergence of the Unicode project encodings, as database vendors realized that it was easier to just allow VARCHAR itself to support multibyte character encodings, and use Character Sets and Collations to deal with specific encodings. For instance, you can use UTF-8 to encode any character you need in any language your applications need to support. Thus, the need for a whole group of character data types that were specific to "NATIONAL CHARACTER" soon faded away.</p><p>Today, in many modern DB engines, "NVARCHAR" and "NATIONAL CHARACTER VARYING" are really just aliases for VARCHAR, with the actual implementation being virtually (if not exactly) identical. Having said that, SQL Server does treat the two differently. As stated in the docs:</p><blockquote>The key difference between varchar and nvarchar is the way they are stored; varchar is stored as regular 8-bit data (1 byte per character) and nvarchar stores data at 2 bytes per character. Due to this reason, nvarchar can hold up to 4000 characters and it takes double the space of SQL varchar.</blockquote><h1 class="blog-sub-title">Which to Use?</h1><p>As specified above, the biggest concern when deciding between types is the amount of storage used. For example, nvarchar uses 2 bytes per character, whereas varchar uses 1. Thus, nvarchar(4000) uses the same amount of storage space as varchar(8000). Hence, if you have requirements to store Unicode or multilingual data, nvarchar is the best choice. Varchar stores ASCII data and should be your data type of choice for normal use. 
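The storage arithmetic behind this advice can be checked outside the database. The following Python sketch compares encoded byte lengths; it illustrates the encodings involved, not SQL Server's actual on-disk format, and the sample strings are invented:

```python
samples = ["database", "café", "日本語"]

for s in samples:
    # nvarchar-style storage: UTF-16, 2 bytes per character (for BMP characters)
    utf16_bytes = len(s.encode("utf-16-le"))
    try:
        # varchar-style storage: a single-byte code page such as Latin-1
        single_bytes = len(s.encode("latin-1"))
        note = f"{single_bytes} bytes in a single-byte code page"
    except UnicodeEncodeError:
        # Characters outside the code page simply cannot be stored in varchar
        note = "not representable in a single-byte code page"
    print(f"{s!r}: {utf16_bytes} bytes as UTF-16, {note}")
```

For pure ASCII text the 2-to-1 ratio holds exactly, which is why nvarchar(4000) and varchar(8000) occupy the same space; for text like "日本語" the single-byte option is not merely larger or smaller, it is unusable.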
Another consideration is that joining a VARCHAR column to an NVARCHAR column (and vice versa) in queries can lead to a considerable performance hit.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we compared SQL Server's Unicode and non-Unicode string data types to decide when to use one over the other.</p></body></html>]]></description>
</item>
<item>
<title>The Purpose of WHERE 1=1 in SQL Statements</title>
<link>https://www.navicat.com/company/aboutus/blog/1812-the-purpose-of-where-1-1-in-sql-statements.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Purpose of WHERE 1=1 in SQL Statements</title></head><body><b>Nov 8, 2021</b> by Robert Gravelle<br/><br/><p>Have you ever seen a WHERE 1=1 condition in a SELECT query? I have, in many different queries and across many SQL engines. The condition obviously means WHERE TRUE, so it's just returning the same query result as it would without the WHERE clause. Also, since the query optimizer would almost certainly remove it, there's no impact on query execution time. So, what is the purpose of WHERE 1=1? That is the question that we're going to answer here today!</p><h1 class="blog-sub-title">Does WHERE 1=1 Improve Query Execution?</h1><p>As stated in the introduction, we would expect the query optimizer to remove the hard-coded WHERE 1=1 clause, so we should not see a reduced query execution time. To confirm this assumption, let's run a SELECT query in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> both with and without the WHERE 1=1 clause.</p><p>First, here's a query against the Sakila Sample Database that fetches customers who rented movies from the Lethbridge store:</p><img alt="without 1=1.jpg" src="https://www.navicat.com/link/Blog/Image/2021/20211108/without 1=1.jpg" height="481" width="356" /><p>The execution time of 0.004 seconds (highlighted with a red outline) can be seen at the bottom of the Messages tab.</p><p>Now, let's run the same query, except with the addition of the WHERE 1=1 clause:</p><img alt="with 1=1.jpg" src="https://www.navicat.com/link/Blog/Image/2021/20211108/with 1=1.jpg" height="481" width="356" /><p>Again, the execution time was 0.004 seconds. Although a query's run time can fluctuate slightly, depending on many factors, it is safe to say that the WHERE 1=1 clause had no effect.</p><p>So, why use it then? 
Simply put, it's...</p><h1 class="blog-sub-title">A Matter of Convenience</h1><p>The truth of the matter is that the WHERE 1=1 clause is merely a convention adopted by some developers to make working with their SQL statements a little easier, both in static and dynamic form. </p><h3>In Static SQL</h3><p>When adding conditions to a query that already has WHERE 1=1, every condition thereafter begins with AND, which makes it easier to comment out conditions in experimental queries.</p><img alt="with _in_static_sql (35K)" src="https://www.navicat.com/link/Blog/Image/2021/20211108/with%20_in_static_sql.jpg" height="253" width="400" /><p>This is similar to another technique where you'd place commas before column names rather than after. Again, it's easier for commenting:</p><img alt="commas (19K)" src="https://www.navicat.com/link/Blog/Image/2021/20211108/commas.jpg" height="200" width="268" /><h3>In Dynamic SQL</h3><p>It's also a common practice when building an SQL query programmatically. It's easier to start with 'WHERE 1=1 ' and then append other criteria such as ' and customer.id=:custId', depending on whether or not a customer ID is provided. This allows the developer to append each subsequent criterion starting with 'and ...'. Here's a hypothetical example:</p><pre>stmt  = "SELECT * "
stmt += "FROM TABLE "
stmt += "WHERE 1=1 "
if user chooses option a then stmt += "and A is not null "
if user chooses option b then stmt += "and B is not null "
if user chooses option c then stmt += "and C is not null "
if user chooses option d then stmt += "and D is not null "</pre><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we learned the answer to the age-old question of "what is the purpose of WHERE 1=1?" It's not an advanced optimization technique, but a style convention espoused by some developers.</p></body></html>]]></description>
</item>
<item>
<title>What Is SQLite and How Does It Differ from MySQL?</title>
<link>https://www.navicat.com/company/aboutus/blog/1798-what-is-sqlite-and-how-does-it-differ-from-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What Is SQLite and How Does It Differ from MySQL?</title></head><body><b>Nov 2, 2021</b> by Robert Gravelle<br/><br/><p>SQLite and MySQL are two of the most popular open source Relational Database Management Systems (RDBMS). Both are fast, cross-platform, robust, and feature-rich. Yet, beyond these similarities, the two databases are dissimilar in several important respects. Since you are probably more familiar with MySQL, this tutorial will list SQLite's most important features, as well as its differences from MySQL, all with the goal of steering you towards the product that will best suit your needs.</p><h1 class="blog-sub-title">Storage and Portability</h1><p>SQLite was designed and built with storage and portability in mind. This is apparent when viewing its main design features:<ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Built with the C language.</li><li>Implements an embedded, server-less, zero-configuration, transactional SQL database engine.</li><li>Does not have a separate server process (unlike most other SQL databases).</li><li>SQLite reads and writes directly to ordinary disk files.</li><li>All tables, indices, triggers, and views are contained within a single disk file.</li><li>The database file format is cross-platform and may be copied between 32-bit and 64-bit systems.</li></ul><p>The SQLite library is about 250 KB in size, while the MySQL server is about 600 MB. Moreover, no configuration is required, and setup can be done with minimal support.</p><p>Before copying or exporting a MySQL database, you must first condense it into a single dump file. For larger databases, this can be a time-consuming process.</p><h1 class="blog-sub-title">Security and Ease of Setup</h1><p>As alluded to in the previous section, SQLite requires little to no configuration, making it extremely easy to set up. 
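That zero-configuration claim is easy to demonstrate. Because SQLite is embedded, a usable database needs nothing but a connection string; a minimal sketch using Python's built-in sqlite3 module (the notes table and its contents are invented for illustration):

```python
import sqlite3

# No server process, no user accounts, no config files:
# connecting creates the database on the spot.
conn = sqlite3.connect(":memory:")  # or a path like "app.db" for an ordinary disk file

conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello, zero config",))
conn.commit()

rows = conn.execute("SELECT id, body FROM notes").fetchall()
print(rows)  # [(1, 'hello, zero config')]
```

The equivalent first step with MySQL involves installing and starting a server, creating a user, and granting privileges before a single row can be inserted.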
On the other hand, MySQL requires significantly more configuration than SQLite. At the same time, MySQL also has more setup guides available to help with this.</p><p>SQLite does not have an inbuilt authentication mechanism. Hence, the database files can be accessed by anyone with access to the file system. Meanwhile, MySQL comes with many inbuilt security features, including authentication with a username and password, and connection over SSH.</p><h1 class="blog-sub-title">Multiple Access and Scalability</h1><p>SQLite does not include user management functionality and so is not suitable for multiple user access. MySQL has a fine-grained user management system which can handle multiple users and grant various levels of access.</p><p>In terms of scalability, SQLite is well suited to smaller databases. As the database grows and its memory requirements increase, SQLite's performance will degrade. Adding to this issue is that performance optimization is more difficult to achieve when using SQLite. On the other hand, MySQL is easily scalable and can handle very large databases, including tables with billions of rows!</p><h1 class="blog-sub-title">Database Administration</h1><p>There are a number of free and commercial-grade graphical database administration tools for both SQLite and MySQL. For SQLite, there's SQLite Administrator. It helps you to create, design and administer SQLite database files. The SQL code editor helps you to quickly write SQL queries and includes features such as code completion and highlighting. </p><p>MySQL's free graphical administration tool is MySQL Workbench. It's a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and more. 
MySQL Workbench is available on Windows, Linux and Mac OS X.</p><p>For more professional applications, there's <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlite" target="_blank">Navicat for SQLite</a>, <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>, or <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. Navicat's powerful and comprehensive GUI provides a complete set of functions for database management and development. By helping you optimize your workflow and productivity, it lets you quickly and securely create, organize, access, and share information.</p><h1 class="blog-sub-title">Conclusion</h1><p>SQLite is an effective solution for developing small standalone apps and for smaller projects which do not require much scalability. Meanwhile, MySQL is the superior option when you require access for multiple users with strong security and authentication, as well as for larger datasets.</p></body></html>]]></description>
</item>
<item>
<title>Null Values and the SQL Count() Function</title>
<link>https://www.navicat.com/company/aboutus/blog/1796-null-values-and-the-sql-count-function.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Null Values and the SQL Count() Function</title></head><body><b>Oct 25, 2021</b> by Robert Gravelle<br/><br/><p>Back in March of 2020, the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1312-the-null-value-and-its-purpose-in-relational-database-systems" target="_blank">The NULL Value and its Purpose in Relational Database Systems</a> article presented the NULL value and its special meaning in relational databases. That article also described how to allow NULLs in your database tables and how to reference them in queries. In today's blog, we'll learn how to combine NULLs with the SQL Count() function to achieve a variety of objectives.</p><h1 class="blog-sub-title">Counting Null and Non-null Values</h1><p>The Count() function comes in two flavors: COUNT(*) counts all rows in the table, whereas COUNT(expression) ignores NULL expressions. Hence, if you provide a column name that allows NULL values, then Count() will count only the rows that have a non-NULL value in that column. These two separate uses of Count() provide an important clue as to how we can obtain a count of NULL values for a specific column: by subtracting the non-NULL count from the total row count, like so:</p><pre>SELECT COUNT(*) - COUNT(&lt;Column Name&gt;)</pre><p>Now that we know how to count null, non-null, and all rows in a table, let's see an example. We'll run this query against the customers table of the <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx" target="_blank">MySQL classicmodels Sample Database</a>. 
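The difference between COUNT(*) and COUNT(column) is easy to reproduce on any engine first. Here's a quick sketch against an in-memory SQLite database via Python's sqlite3 module (the miniature customers table is a stand-in, not the real classicmodels schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, addressLine2 TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [
        ("Atelier graphique", "Suite 101"),  # has an addressLine2
        ("Signal Gift Stores", None),        # NULL addressLine2
        ("La Rochelle Gifts", None),         # NULL addressLine2
    ],
)

# COUNT(*) counts every row; COUNT(col) skips rows where col is NULL.
all_rows, non_null, nulls = conn.execute(
    "SELECT COUNT(*), COUNT(addressLine2), COUNT(*) - COUNT(addressLine2) FROM customers"
).fetchone()
print(all_rows, non_null, nulls)  # 3 1 2
```

As in the full example that follows, the non-NULL count plus the NULL count always adds up to the total row count.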
Here is that table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Table Designer:</p><img alt="customer_table (113K)" src="https://www.navicat.com/link/Blog/Image/2021/20211025/customer_table.jpg" height="384" width="754" /><p>The addressline2 field contains additional address details that are not part of the street name and number. Hence, it's not required for all addresses, as we can see in this sample of table data:</p><img alt="customer_table_2 (97K)" src="https://www.navicat.com/link/Blog/Image/2021/20211025/customer_table_2.jpg" height="518" width="409" /><p>This query uses the Count() function in three ways to show all table rows, the number of populated addressLine2 rows, and the number of NULLs:</p><pre>SELECT COUNT(*) AS All_Rows,
       COUNT(addressLine2) AS addressLine2_Count,
       COUNT(*) - COUNT(addressLine2) AS Null_addressLine2_Rows
FROM customers;</pre><p>Here is the above SELECT statement in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Query Designer, along with the results:</p><img alt="count_addressLine2 (54K)" src="https://www.navicat.com/link/Blog/Image/2021/20211025/count_addressLine2.jpg" height="262" width="540" /><p>As expected, the addressLine2_Count and Null_addressLine2_Rows results add up to the All_Rows count.</p><h1 class="blog-sub-title">Using NULL in Content Analytics</h1><p>The fact that the COUNT(expression) version of the Count() function ignores NULL expressions can be extremely helpful in compiling statistics about table data, especially when combined with other functions such as the SQL IF() function, which is basically the SQL equivalent of the ternary operator:</p><pre>IF(predicate, true-value, false-value)</pre><p>If the predicate is true, IF evaluates to the true-value, or 1 in the query below. 
If the predicate is false, it evaluates to the false-value, or NULL, as seen in the statement below. The COUNTs then tabulate each row where the IFs evaluate to 1, i.e., where the predicate is true:</p><pre>SELECT count(IF(country = 'Australia', 1, NULL)) as Australia_Count,
       count(IF(country = 'Germany', 1, NULL)) as Germany_Count,
       count(IF(country = 'Canada' OR country = 'USA', 1, NULL)) as North_America_Count,
       count(IF(country like 'F%', 1, NULL)) as F_Countries_Count,
       count(IF(creditLimit between 20000 and 1000000, 1, NULL)) as CreditLimit_Range_Count,
       count(*) as Total_Count
FROM customers
WHERE dob >= '1960-01-01';</pre><p>Here is the query and results in Navicat:</p><img alt="null_with_count_and_if_functions (78K)" src="https://www.navicat.com/link/Blog/Image/2021/20211025/null_with_count_and_if_functions.jpg" height="280" width="702" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to combine NULLs with the SQL Count() function to achieve a variety of objectives. More than just a way to count NULL and non-NULL values, when combined with other SQL functions such as IF() and SUM(), these techniques can be utilized to compile all sorts of statistics on your data! </p></body></html>]]></description>
</item>
<item>
<title>Understanding SQL Server CROSS APPLY and OUTER APPLY Queries - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1795-understanding-sql-server-cross-apply-and-outer-apply-queries-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Understanding SQL Server CROSS APPLY and OUTER APPLY Queries - Part 2</title></head><body><b>Oct 19, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">CROSS APPLY and OUTER APPLY Examples</h1><p>The last blog introduced the APPLY operator and covered how it differs from regular JOINs. In today's follow-up, we'll compare the performance of APPLY to that of an INNER JOIN as well as learn how to use APPLY with table-valued functions.</p><h1 class="blog-sub-title">APPLY and INNER JOIN Comparison</h1><p>Recall that, at the end of Part 1, we ran a query made up of two parts: the first query selected data from the Department table and used a CROSS APPLY to evaluate the Employee table for each record of the Department table; the second query joined the Department table with the Employee table to produce the same results:</p><img alt="CROSS APPLY vs INNER JOIN (88K)" src="https://www.navicat.com/link/Blog/Image/2021/20211019/CROSS%20APPLY%20vs%20INNER%20JOIN.jpg" height="555" width="568" /><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat</a>, we can click on the EXPLAIN button to obtain valuable information about the database execution plan. Here's what it reveals about the above queries:</p><img alt="explain_ex_1 (176K)" src="https://www.navicat.com/link/Blog/Image/2021/20211019/explain_ex_1.jpg" height="601" width="875" /><p>Although the execution plans for both queries are similar and carry an equal cost, they do differ somewhat from each other:</p><ul><li>The APPLY query uses a Compute Scalar operator, which calculates a new value from an existing row value by performing a scalar computation, such as a conversion or concatenation. Note that the Compute Scalar operator is not an expensive one; it adds very little cost to the overall weight of our query, causing minimal overhead.</li><li>The JOIN query contains an additional Clustered Index Scan. This occurs when SQL Server reads the rows from top to bottom in the clustered index, such as when searching for data in a non-key column. This is a slightly more costly operation than a Compute Scalar.</li></ul><h1 class="blog-sub-title">Using the APPLY Operator To Join Table-Valued Functions and Tables</h1><p>A table-valued function is a user-defined function that returns data of a table type. Since the return type of a table-valued function is a table, you can use it just like you would use a table. Joining table-valued functions with other tables is what the APPLY operator was designed for.</p><p>Let's create a table-valued function that accepts a DepartmentID as its parameter and returns all the employees who belong to that department. 
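</p><p>As a preview, here is a sketch of what such an inline table-valued function might look like in T-SQL (assuming the Employee table columns from Part 1's sample data; the actual definition appears in the screenshot below):</p><pre>-- Inline table-valued function: returns all employees for a given department
CREATE FUNCTION GetAllEmployeesForDepartment (@DeptID INT)
RETURNS TABLE
AS
RETURN
    SELECT EmployeeID, FirstName, LastName, DepartmentID
    FROM Employee
    WHERE DepartmentID = @DeptID;</pre><p>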
In Navicat, we can create a function by clicking the big <i>Function</i> button on the main toolbar and then clicking on <i>New Function</i> on the Function toolbar: </p><img alt="function_button (22K)" src="https://www.navicat.com/link/Blog/Image/2021/20211019/function_button.jpg" height="114" width="454" /><p>Here is the <i>GetAllEmployeesForDepartment</i> function, after clicking the <i>Save</i> button:</p><img alt="GetAllEmployeesForDepartment_function (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20211019/GetAllEmployeesForDepartment_function.jpg" height="215" width="706" /><p>Watch what happens when we join our new function to each department using both CROSS APPLY and OUTER APPLY:</p><img alt="cross_apply_vs_outer_apply (94K)" src="https://www.navicat.com/link/Blog/Image/2021/20211019/cross_apply_vs_outer_apply.jpg" height="534" width="575" /><p>In each case, the query passes the DepartmentID for each row from the outer table expression and evaluates the function for each row, similar to a correlated subquery. Whereas the CROSS APPLY returned only correlated data, the OUTER APPLY returned non-correlated data as well, which resulted in NULLs for the missing columns.</p><p>We could not replace the CROSS/OUTER APPLY in the above queries with an INNER JOIN/LEFT OUTER JOIN. Doing so would produce the error "The multi-part identifier "D.DepartmentID" could not be bound." This is because the execution context of the outer (JOINed) query differs from that of the function (or a derived table); thus, you cannot bind a value or variable from the outer query to the function as a parameter. Hence, the APPLY operator is required for such queries.</p><h1 class="blog-sub-title">Conclusion</h1><p>That concludes our look at the CROSS APPLY and OUTER APPLY operators. In summary, while the APPLY operator is required when you have to use a table-valued function in the query, it may be utilized with inline SELECT statements as well.</p></body></html>]]></description>
</item>
<item>
<title>Navicat 16 Preview</title>
<link>https://www.navicat.com/company/aboutus/blog/1792-navicat-16-preview.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Navicat 16 Preview</title></head><body><b>Oct 11, 2021</b> by Robert Gravelle<br/><br/><p>Navicat 15 was released with much fanfare back in November of 2019. It came packed with many new features and improvements, most notably in data transfers, the SQL Builder, and modeling. It also added Data Visualization, Dark Mode and native Linux support. Almost two years later to the day, it's time to announce the upcoming release of Navicat 16! It's currently <a class="default-links" href="https://navicat.com/en/download/navicat-16-beta" target="_blank">downloadable in Beta mode</a>, with the official release to be announced shortly. While we're waiting for that, this blog will outline some of the most noteworthy features and improvements. </p><h1 class="blog-sub-title">Data Generation</h1><p>Most organizations won't permit the copying of production data into test environments, and rightly so! Navicat 16's Data Generation tool assists you in creating a large volume of testing data. It can create complex data over multiple related tables. The entire process is guided by a multi-step wizard. On the table selection screen, you can choose the exact order in which to populate the tables, so that no Foreign Key constraints are violated:</p><img alt="table_order (107K)" src="https://www.navicat.com/link/Blog/Image/2021/20211011/table_order.jpg" width="800" /><p>Navicat then shows a detailed preview of the data that will be generated. There, you can choose to regenerate the data for each table, and even edit it in place manually.</p><img alt="test_data_preview (86K)" src="https://www.navicat.com/link/Blog/Image/2021/20211011/test_data_preview.jpg" height="672" width="711" /><h1 class="blog-sub-title">Charts</h1><p>Although charts are not new to Navicat 16, this latest version supports more data sources and chart types than any previous version. 
With a focus on usability and accessibility, Navicat can deliver information and present your findings in a dashboard, for sharing with a wider audience than ever before. Moreover, the process of creating a chart and/or dashboard has been streamlined into clear steps: </p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Create Data Source</li><li>Design Chart</li><li>Present Dashboard</li></ul><img alt="charts (85K)" src="https://www.navicat.com/link/Blog/Image/2021/20211011/charts.jpg" height="672" width="897" /><p>All of your work may be associated with a workspace to keep your data visualizations and presentations well organized.</p><h1 class="blog-sub-title">On-Prem Server</h1><p>The On-Prem Server is a brand new Navicat product. It's an add-on cloud service that works closely with Navicat 16, giving you the option to host a cloud environment for storing Navicat objects internally at your location. Using the On-Prem Server, you can sync your connection strings, queries, and models across multiple devices, as well as share them with your team members from anywhere, anytime.</p><h1 class="blog-sub-title">Collaboration</h1><p>Those of you who are familiar with Navicat 15 are probably well aware of Navicat Cloud. Now, in version 16, the popular cloud service adds Charts and Code Snippets. Navicat Cloud helps your team stay productive and collaborate more effectively and efficiently.</p><p>Navicat Cloud Portal provides a comprehensive tool set to manage your files and projects. 
It simplifies user management activities and allows you to monitor your cloud services through one interface, improving operational efficiency while reducing management costs.</p><h1 class="blog-sub-title">UI/UX Improvements</h1><p>Not only has the UX been completely updated, but many existing features, such as Connection Profile, Query Summary, and Value Picker, have been updated to increase the overall efficiency of your database development.</p><img alt="navicat_16 (224K)" src="https://www.navicat.com/link/Blog/Image/2021/20211011/navicat_16.jpg" height="672" width="931" /><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog provided an overview of Navicat 16's many new features and improvements. In the coming weeks, we'll be exploring these in much more detail.</p><p>If you'd like to give Navicat 16 Beta a try, you can download it <a class="default-links" href="https://navicat.com/en/download/navicat-16-beta" target="_blank">here</a>.</p></body></html>]]></description>
</item>
<item>
<title>Understanding SQL Server CROSS APPLY and OUTER APPLY Queries - Part 1</title>
<link>https://www.navicat.com/company/aboutus/blog/1783-understanding-sql-server-cross-apply-and-outer-apply-queries-part.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Understanding SQL Server CROSS APPLY and OUTER APPLY Queries - Part 1</title></head><body><b>Sep 27, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: APPLY vs JOIN</h1><p>As you are probably aware, JOIN operations in SQL Server are used to join two or more tables. However, in SQL Server, JOIN operations cannot be used to join a table with the output of a table-valued function. In case you have not heard of table-valued functions, these are functions that return data in the form of tables. In order to allow the joining of two table expressions, SQL Server 2005 introduced the APPLY operator. In this blog, we'll learn how the APPLY operator differs from regular JOINs.</p><h1 class="blog-sub-title">About CROSS APPLY and OUTER APPLY</h1><p>The SQL Server APPLY operator comes in two variations: CROSS APPLY and OUTER APPLY:</p><ul><li>The CROSS APPLY operator returns (in its final output) only those rows from the left table expression that match the right table expression.<br/>Thus, the CROSS APPLY is similar to an INNER JOIN, or, more precisely, like a CROSS JOIN with a correlated sub-query with an implicit join condition of 1=1.</li><li>The OUTER APPLY operator returns all the rows from the left table expression regardless of whether they match the right table expression. For those rows for which there are no corresponding matches in the right table expression, it returns NULL values in the columns of the right table expression.<br/>Hence, the OUTER APPLY is equivalent to a LEFT OUTER JOIN.</li></ul><p>Although the same query can often be written using a normal JOIN, the need for APPLY arises when you have a table-valued expression on the right side and you want this table-valued expression to be evaluated for each row from the left table expression. Moreover, there are cases where the use of the APPLY operator can boost query performance. 
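</p><p>In skeletal form, the two variations look like this (a sketch based on the Department and Employee sample tables created below):</p><pre>-- CROSS APPLY: only departments with at least one matching employee
SELECT D.Name, E.FirstName, E.LastName
FROM Department AS D
CROSS APPLY (
    SELECT FirstName, LastName
    FROM Employee
    WHERE DepartmentID = D.DepartmentID
) AS E;

-- OUTER APPLY: all departments, with NULLs where no employee matches
SELECT D.Name, E.FirstName, E.LastName
FROM Department AS D
OUTER APPLY (
    SELECT FirstName, LastName
    FROM Employee
    WHERE DepartmentID = D.DepartmentID
) AS E;</pre><p>Note how the right-side expression references D.DepartmentID from the left side - exactly the per-row evaluation described above. 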
</p><p>Let's explore the APPLY operator further with some examples.</p><h1 class="blog-sub-title">The Sample Data</h1><p>We'll execute our queries against two new tables that we'll create in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>. Here is the design for the Department table:</p><img alt="Department_table_design (47K)" src="https://www.navicat.com/link/Blog/Image/2021/20210927/Department_table_design.jpg" height="234" width="604" /><p>Here is the design for the Employee table:</p><img alt="Employee_table_design (51K)" src="https://www.navicat.com/link/Blog/Image/2021/20210927/Employee_table_design.jpg" height="280" width="586" /><p>Execute the following SQL in the Navicat Query Editor to populate the tables:</p><pre>INSERT [Department] ([DepartmentID], [Name]) VALUES (1, N'Engineering')
INSERT [Department] ([DepartmentID], [Name]) VALUES (2, N'Administration')
INSERT [Department] ([DepartmentID], [Name]) VALUES (3, N'Sales')
INSERT [Department] ([DepartmentID], [Name]) VALUES (4, N'Marketing')
INSERT [Department] ([DepartmentID], [Name]) VALUES (5, N'Finance')
GO

INSERT [Employee] ([EmployeeID], [FirstName], [LastName], [DepartmentID]) VALUES (1, N'Orlando', N'Gee', 1)
INSERT [Employee] ([EmployeeID], [FirstName], [LastName], [DepartmentID]) VALUES (2, N'Keith', N'Harris', 2)
INSERT [Employee] ([EmployeeID], [FirstName], [LastName], [DepartmentID]) VALUES (3, N'Donna', N'Carreras', 3)
INSERT [Employee] ([EmployeeID], [FirstName], [LastName], [DepartmentID]) VALUES (4, N'Janet', N'Gates', 3)</pre><h1 class="blog-sub-title">CROSS APPLY vs INNER JOIN</h1><p>Here is a query that is made up of two parts: the first query selects data from the Department table and uses a CROSS APPLY to evaluate the Employee table for each record of the Department table; the second query simply joins the Department table with the Employee table to produce the same results:</p><img 
alt="CROSS APPLY vs INNER JOIN (88K)" src="https://www.navicat.com/link/Blog/Image/2021/20210927/CROSS%20APPLY%20vs%20INNER%20JOIN.jpg" height="555" width="568" /><h1 class="blog-sub-title">Coming up in Part 2</h1><p>Having introduced the APPLY operator in this blog, Part 2 will outline the differences between using APPLY and JOIN and provide additional uses for APPLY.</p></body></html>]]></description>
</item>
<item>
<title>Overview of RDBMS Index Types</title>
<link>https://www.navicat.com/company/aboutus/blog/1782-overview-of-rdbms-index-types-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Overview of RDBMS Index Types</title></head><body><b>Sep 17, 2021</b> by Robert Gravelle<br/><br/><p>Recently, the subject of database indexes has come up a couple of times, specifically in the articles <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1764-the-downside-of-database-indexing.html" target="_blank">The Downside of Database Indexing</a> and <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1765-the-impact-of-database-indexes-on-write-operations.html" target="_blank">The Impact of Database Indexes On Write Operations</a>. Both pieces alluded to the fact that relational databases support a number of index types. Today's blog will provide an overview of the most common ones.</p><h1 class="blog-sub-title">The Role of Database Indexes</h1><p>In an RDBMS (Relational Database Management System), an index is a special object that allows the user to quickly retrieve records from the database. Typically, an index is implemented as a lookup table that has only two columns: the first column contains a copy of the primary or candidate key of a table; the second column contains a set of pointers holding the address of the disk block where that specific key value is stored.</p><h1 class="blog-sub-title">Two Types of Indexing Methods</h1><p>Index types may be classified based on their indexing attributes. These fall into the two main categories of Primary and Secondary Indexing.</p><p>A Primary Index is an ordered file whose records are of fixed length with two fields. The first field of the index replicates the primary key of the data file in an ordered manner, and the second field contains a pointer that points to the data block where a record containing the key is available.</p><p>Secondary indexes are indexes that store the primary key value rather than a pointer to the data. 
The advantage is that, by accessing data through the primary key, there's no need for any additional data lookup, as all of the data you need can be found in the primary key's leaf pages.</p><p>A secondary index can be generated from a field which has a unique value for each record and should be a candidate key. It is also known as a non-clustering index. This two-level database indexing technique is used to reduce the mapping size of the first level.</p><h1 class="blog-sub-title">Dense vs. Sparse Indexes</h1><p>In a dense index, a record is created for every search key value in the database. This helps you search faster but needs more space to store index records. In this indexing method, records contain the search key value and a pointer to the real record on the disk.</p><p>A sparse index is an index record that appears for only some of the values in the file. Sparse indexes help you resolve the issues of dense indexing. In this method of indexing, a range of index columns stores the same data block address, and when data needs to be retrieved, the block address is fetched. Since sparse indexes only store index records for some search-key values, they need less space and incur less maintenance overhead for insertions and deletions. The drawback is that they are slower than dense indexes for locating records.</p><h3>An Example of Primary and Secondary Indexing</h3><p>In Navicat, fields that are part of the Primary Key are identified on the Fields tab of the Table Designer:</p><img alt="pk (35K)" src="https://www.navicat.com/link/Blog/Image/2021/20210917/pk.jpg" height="162" width="522" /><p>Secondary indexes are often required on tables so that users can search on fields that are not part of the Primary Key. 
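</p><p>Creating such a secondary index takes a single statement. For example (using hypothetical table and column names for illustration):</p><pre>-- Secondary (non-clustering) index on a non-key search field
CREATE INDEX idx_customers_last_name
    ON customers (last_name);</pre><p>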
In Navicat, all secondary indexes are listed on the Indexes tab:</p><img alt="secondary_index (58K)" src="https://www.navicat.com/link/Blog/Image/2021/20210917/secondary_index.jpg" height="257" width="676" /><p>By clicking the EXPLAIN button, we can see what indexes the database is using to fetch records for a given query:</p><img alt="explain (57K)" src="https://www.navicat.com/link/Blog/Image/2021/20210917/explain.jpg" height="266" width="778" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog provided an overview of the most common RDBMS index types and provided an example using Navicat Premium. If you're interested in learning more about <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, you can try it for free for 14 days!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Changing a Column's Data Type In Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1780-changing-a-column-s-data-type-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Changing a Column's Data Type In Relational Databases</title></head><body><b>Sep 10, 2021</b> by Robert Gravelle<br/><br/><p>Over time, system requirements change. These may necessitate the creation of new databases, tables, and columns as well as the altering of existing table structures. Changing a column's data type may be a trivial operation or a difficult one, depending on the source and target data types, as well as the data contained within the column. This blog will address some of the common challenges in changing a column's data type, along with strategies which you can employ to facilitate the process.</p><h1 class="blog-sub-title">Alter Table Statement</h1><p>The structure (schema) of existing tables can be altered using the ALTER TABLE statement. It's a Data Definition Language (DDL) statement, just like CREATE TABLE, DROP FUNCTION, and GRANT. Its basic syntax is:</p><pre>ALTER TABLE table_to_change
    what_to_change
    (additional_arguments)</pre><p>The ALTER TABLE statement may be utilized to change all sorts of table properties, from changing the table name to adding, dropping, and modifying columns. </p><h3>One Statement, Varying Syntax</h3><p>You may have noticed that, after the first line, the ALTER TABLE statement's syntax becomes quite vague. That's because it varies from vendor to vendor. For example:</p><h4>In SQL Server</h4><pre>ALTER TABLE table_name
ALTER COLUMN column_name column_type;</pre><h4>In PostgreSQL</h4><pre>ALTER TABLE table_name
ALTER COLUMN column_name TYPE column_definition;</pre><h4>In Oracle, MySQL, and MariaDB</h4><pre>ALTER TABLE table_name
MODIFY column_name column_type;</pre><h1 class="blog-sub-title">A Simple Example</h1><p>Some databases, such as Oracle, don't allow you to change the type of columns that contain data. 
If you do, you'll get an error such as this:</p><pre>Error report:
SQL Error: ORA-01439: column to be modified must be empty to change datatype
01439. 00000   column to be modified must be empty to change datatype</pre><p>However, most database types do allow you to make changes to populated tables. </p><p>Here's a MySQL table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Table Designer that shows the column definitions:</p><img alt="brands_table_design (95K)" src="https://www.navicat.com/link/Blog/Image/2021/20210910/brands_table_design.jpg" height="471" width="694" /><p>We can execute an ALTER TABLE statement to increase the <i>name</i> (VARCHAR) column's capacity to 255 characters: </p><img alt="alter_name_column (27K)" src="https://www.navicat.com/link/Blog/Image/2021/20210910/alter_name_column.jpg" height="253" width="397" /><h1 class="blog-sub-title">Converting a Column from VARCHAR to INT</h1><p>It's not uncommon to see VARCHAR columns that contain numeric data. In some cases, it may be advantageous to change the column's type to a numeric one. In Navicat, we can set a column's type by choosing it from a drop-down:</p><img alt="brand_code_type (35K)" src="https://www.navicat.com/link/Blog/Image/2021/20210910/brand_code_type.jpg" height="230" width="537" /><p>Changes are made once the Save button is clicked. If you forget, Navicat will prompt you to save your changes when you close the Table Designer.</p><h1 class="blog-sub-title">Data Truncated Error</h1><p>You should avoid diminishing the size of a column's data type whenever possible; otherwise, you'll get a Data Truncated error, such as:</p><pre>#1265 - Data truncated for column 'name' at row 2</pre><p>There are no hard and fast rules for dealing with this error, but generally, you can update the value(s) in question yourself and then re-run the ALTER TABLE statement. 
For instance, here's a statement that truncates all <i>name</i> values to ten characters:</p><img alt="update_brands (10K)" src="https://www.navicat.com/link/Blog/Image/2021/20210910/update_brands.jpg" height="69" width="330" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog outlined some of the common challenges in changing a column's data type, along with strategies which you can employ to facilitate the process.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>?  You can try it for free for 14 days!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Floating Point Rounding Errors in MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1768-floating-point-rounding-errors-in-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Floating Point Rounding Errors in MySQL</title></head><body><b>Sep 3, 2021</b> by Robert Gravelle<br/><br/><p>Although MySQL DECIMAL and NUMERIC data types are both fixed-point values, they are still susceptible to rounding errors. The reason is that, no matter how many digits a type can accommodate (the maximum number of digits for DECIMAL is 65!), that number is still fixed. Moreover, DECIMAL columns can be assigned a precision or scale that has the potential effect of truncating values to the allowed number of digits. </p><figure>  <figcaption>Decimal Column In Navicat Table Designer</figcaption>  <img alt="decimal_column_in_navicat (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20210903/decimal_column_in_navicat.jpg" height="163" width="620" /></figure><p>I became aware of potential rounding errors in MySQL when a reader asked me why a couple of similar queries were returning slightly different DECIMAL values in calculations. This prompted me to go on a journey of discovery. 
In today's blog, I would like to share some of what I learned about floating point rounding in MySQL.</p><h1 class="blog-sub-title">A Tale of Two Queries</h1><p>Here are the queries I used to show the discrepancy:</p><figure>  <figcaption>Hourly Payments Calculation Using a Subquery</figcaption>  <img alt="subquery (37K)" src="https://www.navicat.com/link/Blog/Image/2021/20210903/subquery.jpg" height="264" width="397" /></figure><figure>  <figcaption>Hourly Payments Calculation Using Group By With Rollup</figcaption>  <img alt="group_by (57K)" src="https://www.navicat.com/link/Blog/Image/2021/20210903/group_by.jpg" height="354" width="547" /></figure><p><i>Note: some rows were removed from the Group By With Rollup query to reduce the height of the image.</i></p><p>The reader was calculating employee salaries, but I did not have an identical table to query, so I used the most similar table that I could find, and that was the payments table of the <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx" target="_blank">classicmodels sample database</a>:</p><img alt="payments_table (114K)" src="https://www.navicat.com/link/Blog/Image/2021/20210903/payments_table.jpg" height="560" width="452" /><p>In this context, perhaps calculating hourly payments does not make much sense, but the queries did highlight the rounding differences between the two SELECT statements (4256.65347<strong>5</strong> vs. 4256.65347<strong>6</strong>).</p><p>So, why does SUM using a subquery and the GROUP BY WITH ROLLUP produce different results?</p><h1 class="blog-sub-title">Floating-point Approximate Values Versus Fixed-point Exact Values</h1><p>The floating-point (approximate value) types are FLOAT, REAL, and DOUBLE, while the fixed-point (exact value) types are INTEGER, SMALLINT, DECIMAL, and NUMERIC. 
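</p><p>A quick experiment illustrates the difference. In MySQL, a literal such as 0.1 is treated as an exact DECIMAL value, while the same number written with an exponent, 0.1e0, becomes an approximate DOUBLE (the second result below is what IEEE double arithmetic typically produces):</p><pre>SELECT 0.1 + 0.2;      -- exact-value arithmetic: 0.3
SELECT 0.1e0 + 0.2e0;  -- approximate arithmetic: 0.30000000000000004</pre><p>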
Floating-point means the decimal point can be placed anywhere relative to the significant digits of the number with the actual position being indicated separately. Meanwhile, a fixed-point value is an integer that is scaled by a specific factor.</p><p>Back in version 5.5, MySQL added support for precision math, which included a library for fixed-point arithmetic that replaced the underlying C library and allowed operations to be handled in the same manner across different platforms. Since this update, if no approximate values or strings are being used in a calculation, expressions are evaluated using DECIMAL exact value arithmetic with precision of 65 digits. For GROUP BY functions, STDDEV() and VARIANCE() return DOUBLE, an approximate floating-point type, while SUM() and AVG() return a DECIMAL for exact-value arguments and a DOUBLE for approximate value.</p><p>Another ramification of the new MySQL library for fixed-point arithmetic is that type conversion is now handled using floating-point values. Thus, the results of type conversion may vary and can be affected by factors such as computer architecture, the compiler version or even the optimization level. One way to avoid these problems is to use an explicit CAST() rather than the implicit conversion.</p><h1 class="blog-sub-title">Which Result is Correct?</h1><p>Getting back to the initial queries, which value is more accurate and which one is the correct way to obtain the SUM? The truth is, neither is exactly accurate, but, by using a little algebra, the query can be simplified to yield an accurate result:</p><img alt="simplified_query (30K)" src="https://www.navicat.com/link/Blog/Image/2021/20210903/simplified_query.jpg" height="266" width="383" /><p>The key to accurate rounding is to work with whole numbers in as many initial steps as possible.</p></body></html>]]></description>
</item>
<item>
<title>Working With the MySQL Slow Query Log</title>
<link>https://www.navicat.com/company/aboutus/blog/1767-working-with-the-mysql-slow-query-log.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Working With the MySQL Slow Query Log</title></head><body><b>Aug 27, 2021</b> by Robert Gravelle<br/><br/><p>MySQL provides several different log files that can help you find out what's going on inside your MySQL server instance. These include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>error log</li><li>isam log</li><li>general query log</li><li>binary log</li><li>slow log</li></ul><p>Of these, the slow query log is especially useful for finding inefficient or time-consuming queries, which can adversely affect database and overall server performance. This blog will describe how to read and interpret slow query log output to better debug query performance.</p><h1 class="blog-sub-title">Enabling the Slow Query Log</h1><p>The slow query log consists of SQL statements that take more than <i>long_query_time</i> seconds to execute and require at least <i>min_examined_row_limit</i> rows to be examined. Hence, queries that appear in the slow query log are those that take a substantial time to execute and are thus candidates for optimization. </p><p>The slow query log is disabled by default so as to save disk space. You can turn it on by setting the <i>--slow_query_log</i> variable to 1 (ON in Navicat); providing no argument also turns on the slow query log. Conversely, an argument of 0 (OFF in Navicat) disables the log.</p><p>In Navicat, you can access system variables using the Server Monitor tool. It's accessible via the Tools main menu command. 
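</p><p>If you prefer SQL, the relevant variables can also be set at runtime from any query window (these settings last until the server restarts; add them to your option file to make them permanent):</p><pre>-- Enable the slow query log and log queries running longer than 2 seconds
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

-- Verify the current settings
SHOW GLOBAL VARIABLES LIKE 'slow_query%';</pre><p>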
In the Server Monitor, click on the middle <i>Variables</i> tab and scroll down to see the <i>slow_query_log</i> and <i>slow_query_log_file</i> server variables in the list:</p><img alt="slow_query_log_vars_in_navicat (50K)" src="https://www.navicat.com/link/Blog/Image/2021/20210827/slow_query_log_vars_in_navicat.jpg" height="284" width="595" /><h1 class="blog-sub-title">Reading the Slow Query Log</h1><p>Examining a long slow query log can be a time-consuming task due to the huge amount of content to sift through. Here is what a typical entry in the slow log file might look like: </p><pre># Time: 140905  6:33:11
# User@Host: dbuser[dbname] @ hostname [1.2.3.4]
# Query_time: 0.116250  Lock_time: 0.000035 Rows_sent: 0  Rows_examined: 20878
use dbname;
SET timestamp=1409898791;
...SLOW QUERY HERE...</pre><p>To make reading the log contents easier, you can use the <i>mysqldumpslow</i> command-line utility to process a slow query log file and summarize its contents:</p><pre>~ $ mysqldumpslow -a /var/lib/mysql/slowquery.log

Reading mysql slow query log from /var/lib/mysql/slowquery.log
Count: 2  Time=316.67s (633s)  Lock=0.00s (0s)  Rows_sent=0.5 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root]@localhost
...SLOW QUERY HERE...</pre><h3>Navicat Query Analyzer</h3><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>'s Query Analyzer tool provides a graphical representation of the query logs that makes interpreting their contents much easier. In addition, the Query Analyzer tool enables you to monitor and optimize query performance, visualize query activity statistics, analyze SQL statements, as well as quickly identify and resolve long running queries. 
</p><p>In addition to the Slow Query Log, the Query Analyzer collects information about query statements using one of the following methods:</p><ol><li>Retrieve the General Query Log from the server and analyze its information.</li><li>Query the performance_schema database and analyze it for specific performance information.</li></ol><p>You'll find the Query Analyzer section below the Latest Deadlock Query and Process List sections:</p><img alt="query_analyzer (134K)" src="https://www.navicat.com/link/Blog/Image/2021/20210827/query_analyzer.jpg" height="789" width="1046" /><h1 class="blog-sub-title">Conclusion</h1><p>This blog presented a few ways to read slow query log output to better debug the performance of your queries.</p><p>Click <a class="default-links" href="https://www.navicat.com/en/discover-navicat-monitor" target="_blank">here</a> for more details about all of Navicat Monitor's features, or, <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">download</a> the 14-day fully functional free trial!</p></body></html>]]></description>
</item>
<item>
<title>Identifying Long Running Queries</title>
<link>https://www.navicat.com/company/aboutus/blog/1766-identifying-long-running-queries.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Identifying Long Running Queries</title></head><body><b>Aug 23, 2021</b> by Robert Gravelle<br/><br/><p>When your database runs slowly for an extended period, the culprit is more often than not a "bad" query. That is to say, a query that is not fully optimized, poorly written, or gives users the ability to fetch an unlimited number of rows from the database. We can alleviate some pain by throwing more resources at the server, but this is really a short-term fix and does not address the underlying issue. The best course of action is to identify and fix the problem query or queries, which shouldn't be too difficult, given some time and effort. Of course, the first step is to identify which query or queries are the culprits. There are a few ways to do that, depending on your specific database type. Today's blog will highlight a few strategies for MySQL.</p><h1 class="blog-sub-title">Using the MySQL PROCESSLIST Table</h1><p>The PROCESSLIST table is one of many metadata tables within the INFORMATION_SCHEMA database. As the name suggests, it maintains information for all processes running within a database instance. There are several ways to access it, as shown in the next several sections.</p><h3>Using the mysqladmin Command Line Tool</h3><p>The mysqladmin command line tool ships with MySQL. Run it with the flag "processlist" (or "proc" for short) to see currently running processes. 
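</p><p>For readers who prefer SQL over the command line, a roughly equivalent query selects from the INFORMATION_SCHEMA.PROCESSLIST table directly (the 60-second threshold below is just an illustration):</p><pre>
-- List long-running queries, longest first
SELECT ID, USER, HOST, DB, TIME, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query'
  AND TIME > 60          -- seconds; adjust to taste
ORDER BY TIME DESC;
</pre><p>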
Moreover, adding the "statistics" flag (or "stat" for short) will show running statistics for queries since MySQL's last restart:</p><p>Here is some sample output:</p><pre>+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+| Id    | User | Host      | db        | Command | Time | State | Info               | Progress |+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+| 77255 | root | localhost | employees | Query   | 150  |       | call While_Loop2() | 0.000    || 77285 | root | localhost |           | Query   | 0    | init  | show processlist   | 0.000    |+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+Uptime: 781398  Threads: 2  Questions: 18761833  Slow queries: 0  Opens: 2976  Flush tables: 1  Open tables: 101  Queries per second avg: 26.543</pre><p>Since this command runs on the shell interface, you can pipe output to other scripts and tools. The downside is that the PROCESSLIST table's info column is always truncated so it does not provide the full query on longer queries.</p><h3>Querying the MySQL PROCESSLIST Table</h3><p>One way to query the PROCESSLIST table is to run the "show processlist;" query from within MySQL's interactive mode prompt. Navicat users can execute the show processlist query directly within the SQL Editor just like any query:</p><img alt="show_processlist (47K)" src="https://www.navicat.com/link/Blog/Image/2021/20210823/show_processlist.jpg" height="285" width="633" /><p>Note that adding the "full" modifier to the command is sometimes required in order to disable truncation of the Info column. 
(This is necessary when viewing long queries.)</p><h1 class="blog-sub-title">Using a Monitoring Tool</h1><p>For more in-depth analysis of query performance, many professional database administrators (DBAs) employ a database monitor such as <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">Navicat Monitor</a>. It has a query analyzer that monitors queries in real time to quickly improve the performance and efficiency of your server. It shows the summary information of all executing queries and lets you easily uncover problematic queries. As you can see in the image below, Navicat Monitor can sort queries by execution time, so that the slowest can be found at a glance:</p><img alt="query_analyzer (125K)" src="https://www.navicat.com/link/Blog/Image/2021/20210823/query_analyzer.jpg" height="621" width="1023" /><h1 class="blog-sub-title">Conclusion</h1><p>In this blog we learned a few easy ways to identify slow queries using the MySQL PROCESSLIST Table as well as Navicat Monitor. </p><p>Click <a class="default-links" href="https://www.navicat.com/en/discover-navicat-monitor" target="_blank">here</a> for more details about all of Navicat Monitor's features, or, <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">download</a> the 14-day fully functional free trial!</p></body></html>]]></description>
</item>
<item>
<title>The Impact of Database Indexes On Write Operations</title>
<link>https://www.navicat.com/company/aboutus/blog/1765-the-impact-of-database-indexes-on-write-operations.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Impact of Database Indexes On Write Operations</title></head><body><b>Aug 16, 2021</b> by Robert Gravelle<br/><br/><p>In last week's blog, we learned about the ramifications of poor indexing, as well as how to choose which columns to include as part of a clustered index. In this article, we'll cover how the same indexes that provide better performance for some operations can add overhead for others.</p><h1 class="blog-sub-title">How Clustered Indexes Affect INSERTs, UPDATEs and DELETEs</h1><p>In general, having indexes on tables comes with the additional cost that more data pages and memory are used. On clustered tables, the effects of indexes are even more pronounced. A clustered table is one where a clustered index is used to store the data rows sorted based on the clustered index key values. SELECT statements tend to execute noticeably faster on a clustered table, whereas INSERTs, UPDATEs, and DELETEs require more time, as not only the data but also the indexes must be updated. For clustered indexes, the time increase is more significant than for nonclustered indexes, as the records have to maintain the correct order within the data pages. Whether a new record is inserted or an existing one is deleted or updated, the records usually have to be reordered.</p><p>INSERTs tend to perform fastest on a table without any indexes, because neither re-ordering nor index updating is required. On the same table, executing UPDATEs and DELETEs is the most expensive. 
The reason is that, without an index, the database requires more time to find the specific records within the table.</p><p>Conversely, the costs can be highest for a table with a non-optimal clustered index, followed by tables with a non-clustered index or no indexes at all.</p><p>With regard to SELECT statements, you can reduce the cost of execution by:</p> <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>specifying the list of returned columns, and </li><li>executing the statement on the table where the clustered index is created on the primary key column</li></ul><h1 class="blog-sub-title">Examples of DML Impact</h1><p>We can see the impact of indexes on DML (Data Manipulation Language) statements on the following album table, whose definition shows a large number of indexes:</p><img alt="album_table (63K)" src="https://www.navicat.com/link/Blog/Image/2021/20210816/album_table.jpg" height="360" width="615" /><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>, we can view index details in the Table Designer's Indexes tab:</p><img alt="album_table_indexes (52K)" src="https://www.navicat.com/link/Blog/Image/2021/20210816/album_table_indexes.jpg" height="209" width="573" /><p>Both the Index Type and method may be chosen via drop-downs, which are tailored specifically to the database type.  
Here are the available selections for Navicat for MySQL:</p><img alt="index_drop-downs (14K)" src="https://www.navicat.com/link/Blog/Image/2021/20210816/index_drop-downs.jpg" height="132" width="219" /><p>By running a simple benchmark we can test the insert rate of the current album table with the original definition that includes a Primary Index only:</p><img alt="benchmark_statements (49K)" src="https://www.navicat.com/link/Blog/Image/2021/20210816/benchmark_statements.jpg" height="225" width="544" /><p>Here are the timed results:</p><img alt="benchmark_statements_results (20K)" src="https://www.navicat.com/link/Blog/Image/2021/20210816/benchmark_statements_results.jpg" height="119" width="396" /><p>Inserting data into the table with additional indexes was four times slower in my informal bulk tests. There are other factors that can contribute to the slower speed; however, my results provide a representative indication that adding indexes to a table has a direct effect on write performance.</p><h1 class="blog-sub-title">Conclusion</h1><p>As shown, indexes can speed up some queries and slow down others. In this article, we provided some basic guidelines for clustered and non-clustered indexes, as well as which columns are preferred for building indexes on, and which should be avoided. Finding the right balance between the benefits and the overhead indexes bring provides optimal performance for your queries and stored procedures.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>?  You can try it for free for 14 days!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. 
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>The Downside of Database Indexing</title>
<link>https://www.navicat.com/company/aboutus/blog/1764-the-downside-of-database-indexing.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Downside of Database Indexing</title></head><body><b>Aug 9, 2021</b> by Robert Gravelle<br/><br/><p>It is common knowledge that judicious use of indexes can help SELECT queries execute significantly faster. This can tempt some database admins (DBAs) to try to squeeze as much performance as possible out of their tables by adding indexes to every column that might possibly be included in a query. The downside to adding indexes to a table is that they affect the performance of writes. Moreover, improperly created indexes can even adversely affect SELECT queries! Any table configuration where performance suffers due to excessive, improper, or missing indexes is considered to be poor indexing. In today's blog, we'll learn about the ramifications of poor indexing, as well as cover how to choose which columns to include as part of a clustered index.</p><h1 class="blog-sub-title">The Effects of Poor Indexing </h1><p>A poor index can be an index created on a column that doesn't provide easier data manipulation or an index created on multiple columns which, rather than speed up queries, slows them down.</p><p>If indexes are not created properly, the database has to go through more records in order to retrieve the data requested by a query. Therefore, it uses more hardware resources (processor, memory, disk, and network) and makes fetching the data take longer.</p><p>A table without a clustered index can also be considered a poor indexing practice in some cases. Execution of a SELECT statement, inserting, updating, and deleting records is in most cases slower on a heap table (i.e. a table without a clustered index) than on a clustered one.</p><h1 class="blog-sub-title">Choosing Columns For Clustered Indexes</h1><p>When you create a table with a primary key (PK) in a relational database such as SQL Server, a unique clustered index is automatically created on the primary key column. 
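</p><p>For instance, a hypothetical SQL Server table like the following (names invented for illustration) receives a unique clustered index on its primary key without any further action:</p><pre>
-- The PRIMARY KEY constraint creates a unique clustered index by default
CREATE TABLE orders (
    order_id    INT IDENTITY(1,1) PRIMARY KEY,  -- ever-increasing, rarely updated
    customer_id INT NOT NULL,
    order_date  DATETIME NOT NULL
);
</pre><p>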
Although this default action is perfectly acceptable in most cases, this might not be the optimal index for your data.</p><p>The columns that make up a clustered index should form a unique key, an identity, a primary key, or any combination whose values increase with each new entry. As clustered indexes sort the records based on the value, using a column already ordered ascending, such as an identity column, is a good choice.</p><p>A column whose value changes frequently should not be used for a clustered index. The reason is that each change of the column used for the clustered index requires the records to be reordered. This re-ordering can easily be avoided by using a column that is updated less frequently, or ideally, not updated at all.</p><p>Likewise, columns that store large data, such as BLOB columns (text, nvarchar(max), image, etc.), and GUID columns are not ideal for clustered indexes. This is because sorting large values is highly inefficient, and in the case of GUID and image columns, doesn't make much sense.</p><p>Finally, a clustered index should not be built on a column already used in a unique index.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned about the ramifications of poor indexing, as well as how to choose which columns to include as part of a clustered index. In an up-coming article, we'll cover how the same indexes that provide better performance for some operations can add overhead for others.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. 
</p></body></html>]]></description>
</item>
<item>
<title>What Is Database Monitoring and Why Is It Useful?</title>
<link>https://www.navicat.com/company/aboutus/blog/1756-what-is-database-monitoring-and-why-is-it-useful.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What Is Database Monitoring and Why Is It Useful?</title></head><body><b>Jul 29, 2021</b> by Robert Gravelle<br/><br/><p>Databases play a central role in most business processes and applications. As IT infrastructures become more diverse and sophisticated, it becomes increasingly important to be able to nip database issues in the bud. In simpler times, one or more database administrators (DBAs) could resolve issues manually as they came up in true firefighter fashion. Today, that approach is almost certainly doomed to fail. </p><p>Smart DBAs rely on database monitoring not only to pinpoint trouble quickly, but also to predict future issues before they cause real problems. In this article, we'll examine what database monitors do. In up-coming installments we'll learn more about how they work and explore some best practices for using monitoring software. </p><h1 class="blog-sub-title">Database Monitoring Explained</h1><p>Simply put, database monitoring is the tracking of database performance and resources using key metrics with a goal of enabling high performance and availability to more fully support an organization's application infrastructure. Categories of common metrics for database monitoring include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Query details (top CPU, slow running, and most frequent)</li><li>Session details (current user connections and locks)</li><li>Scheduled jobs</li><li>Replication details</li><li>Database performance (buffer, cache, connection, lock)</li></ul><p>Data from each of these categories is analyzed in order to minimize, or ideally, prevent database outages or slowdowns. The selection of the data points and how they are analyzed will vary based on the type of database. Moreover, the above metrics (and many others) are typically monitored in real time, thus allowing you to identify or predict issues. 
When done properly, effective database monitoring gives you the opportunity to enhance or optimize your database, in order to augment overall performance.</p><p>To maximize the efficacy of your database monitoring strategy, you should analyze data across a range of categories, with the intention of minimizing or preventing lags or unavailability. In that regard, note that different types of databases will require different metrics (and/or data points) to be analyzed.</p><p>Ideally, database monitoring tracks the performance of both hardware and software by taking frequent snapshots of performance indicators. This allows you to identify any changes, locate bottlenecks, and pinpoint the exact moment problems started to occur. With this information in hand, you can then rule out potential causes, and can address the real root cause of the issue.</p><h1 class="blog-sub-title">Monitors Are Not All Created Equally</h1><p>There are several competing products on the market that all provide similar functionality. If you use MySQL, MariaDB, SQL Server, or Cloud databases like Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud or Microsoft Azure, you should consider <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>. Although it comes packed with powerful features to make your monitoring as effective as possible, that in itself is not the sole reason for choosing it. The most important feature is that Navicat Monitor provides agentless remote server monitoring. As such, it does not require any software to be installed on monitored servers, thus leaving their full resources available to process requests. Another benefit to utilizing agentless architecture is that Navicat Monitor can be accessed from anywhere via a web browser. 
With web access, you can easily and seamlessly keep track of your servers around the world, around the clock.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this article, we examined the main functions of relational database monitors. In up-coming installments we'll learn more about how they work and explore some best practices for using monitoring software.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>?  You can try it for free for 14 days!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>How to Partition a MySQL Table Using Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/1755-how-to-partition-a-mysql-table-using-navicat.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>How to Partition a MySQL Table Using Navicat</title></head><body><b>Jul 23, 2021</b> by Robert Gravelle<br/><br/><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1754-data-type-conversion-in-mysql-8.html" target="_blank">last week's blog</a> we learned about the potential uses and advantages to utilizing Database Partitioning when working with large data sets. In today's follow-up, we'll create a MySQL partition in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> using the HASH partitioning criteria.</p><h1 class="blog-sub-title">Launching the Partitioning Dialog in Navicat</h1><p>In Navicat, you'll find the Partition button on the Options tab of the Table Designer, at the bottom of the page:</p><img alt="partition_button (65K)" src="https://www.navicat.com/link/Blog/Image/2021/20210723/partition_button.jpg" height="586" width="507" /><p>Click this button to open the Partition dialog.</p><h3>Creating a HASH Partition on a Table</h3><p>The very first control on the Partition dialog is the <i>Partition By</i> drop-down:</p><img alt="partition_by_dropdown (17K)" src="https://www.navicat.com/link/Blog/Image/2021/20210723/partition_by_dropdown.jpg" height="171" width="369" /><p>The types of partitioning supported depend on the database type and version. Here are the options that you'll find in Navicat for MySQL 7:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>Range partitioning: Range (or Interval) partitioning is useful when organizing similar data - especially date and time data.  Hence, Range partitioning is ideal for partitioning historical data. </li>    <li>List partitioning: Explicitly maps rows to partitions based on discrete values. 
For example, all the customers from southern states could be stored in one partition while customers from northern states would be stored in another partition. </li>    <li>Composite partitioning: Partitions on multiple dimensions, based on identification by a partitioning key. For example, you may decide to store data for a specific product type in a read-only, compressed format, and keep other product type data uncompressed. Composite partitioning also increases the number of partitions significantly, which may be beneficial for efficient parallel execution.</li>    <li>Round-robin partitioning: Assigns rows in a round-robin manner to each partition so that each partition contains a more or less equal number of rows and load balancing is achieved. In this case there is no partition key, so rows are distributed randomly across all partitions.</li>    <li>Hash partitioning: Randomly distributes data across partitions based on a hashing algorithm, rather than grouping similar data. Useful for times when it is not obvious in which partition data should reside, although the partitioning key can be identified. Hence, data is distributed such that it does not correspond to a business or a logical view of the data, as it does in Range partitioning.</li></ul>  <br><h4>Some Caveats</h4><p>In order to benefit from Partitioning, you'll want to make sure that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>if you supply a column on which to partition the table, it is part of every unique key in that table, and</li><li>you are partitioning the table on the column(s) most commonly utilized in your queries. Otherwise, there will be no benefit from creating partitions.</li></ul><h4>Defining the Partition Details</h4><p>The Partition dialog supports many options, including subpartitions as well as the ability to manually create partition definitions. 
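</p><p>As a sketch of what manually defined partitions look like, here is a hypothetical RANGE scheme on a date column (the table and partition names are invented for illustration):</p><pre>
-- Split rows by year; pmax catches everything beyond the named ranges
ALTER TABLE rental
PARTITION BY RANGE (YEAR(rental_date)) (
    PARTITION p2019 VALUES LESS THAN (2020),
    PARTITION p2020 VALUES LESS THAN (2021),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
</pre><p>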
However, for a simple HASH partition, we only need to provide the partition criteria (a table column) and the number of partitions:</p><img alt="partition_dialog (60K)" src="https://www.navicat.com/link/Blog/Image/2021/20210723/partition_dialog.jpg" height="667" width="602" /><p>Click the OK button to create the partition in one easy step!</p><p>On the SQL Preview tab, you can view the SQL statement that was generated by Navicat:</p><pre>
ALTER TABLE `sakila2`.`film` PARTITION BY HASH (actor)
PARTITIONS 10
(PARTITION `p0` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p1` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p2` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p3` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p4` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p5` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p6` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p7` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p8` MAX_ROWS = 0 MIN_ROWS = 0 ,
PARTITION `p9` MAX_ROWS = 0 MIN_ROWS = 0 );
</pre><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we created a MySQL partition in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> using HASH partitioning criteria.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>?  You can try it for free for 14 days!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Data Type Conversion in MySQL 8</title>
<link>https://www.navicat.com/company/aboutus/blog/1754-data-type-conversion-in-mysql-8.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Data Type Conversion in MySQL 8</title></head><body><b>Jul 9, 2021</b> by Robert Gravelle<br/><br/><p>Any time that you categorize data into different types, the need to convert from one data type to another is inevitable. Off the top of my head, a common use case is to process variables that were passed in from a web form via a query parameter or POST request body. Serializing data in order to send it across the network tends to coerce all variables into strings. As such, they often need to be converted into a more appropriate data type, such as a number, date, or what-have-you. </p>   <p>In relational databases, reasons for converting one data type to another include porting data from one database type to another, changing the data type of a column, or temporarily switching between data types for evaluation. In MySQL, we can convert between data types using the CAST() and CONVERT() functions. In today's blog, we'll learn how to employ both functions using examples to illustrate their usage.</p>  <h1 class="blog-sub-title">What's the Difference?</h1><p>Both CAST() and CONVERT() can change data types in MySQL. Since the two are so similar, many SQL newbies (and some more experienced users!) wonder what the difference is. The main difference is that CONVERT() can also convert the character set of data into another character set. CAST() cannot be used to change character sets. Hence, CAST() should be your <i>go-to</i> conversion function, unless you need to convert a character set.</p><h1 class="blog-sub-title">The CAST() Function</h1><p>MySQL CAST() accepts two inputs:</p> <ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>the data to be typecast </li><li>the data type (decimal, char, etc.) to which you want to convert this data. 
You can cast data into BINARY, CHAR, DATE, DATETIME, TIME, DECIMAL, SIGNED, UNSIGNED data types.</li></ul><p>Here's the syntax:</p><pre>CAST(data as data_type)</pre><h3>An Almost Real-life Example</h3><p>One useful application of the CAST() function is to make a very large data type less unwieldy (more wieldy?). The following query returns information about a particular film in the MySQL Sakila Sample Database. One of the columns - description - is a text field. That means that it can store a huge amount of text! We can use CAST() to truncate the description to 100 characters, so that we don't get a whole book about the movie:</p><img alt="cast_example (60K)" src="https://www.navicat.com/link/Blog/Image/2021/20210709/cast_example.jpg" height="319" width="544" /><p><i>Speaking of the Sakila Sample Database, did you know that it's named after MySQL's dolphin mascot? It was chosen from a huge list of names suggested by users in a "Name the Dolphin" contest. The winning name was submitted by Ambrose Twebaze, an Open Source software developer from Eswatini (formerly Swaziland), Africa.</i></p><h1 class="blog-sub-title">The CONVERT() Function</h1><p>The CONVERT() function's syntax is similar to CAST(), but the expression and result type are supplied in a slightly different format. One way is to supply two separate arguments:</p><pre>CONVERT(expr, data_type)</pre><p>Here, the data_type parameter can be any of the same types that are supported by the CAST() function.</p><h3>A Not Quite Real-life Example</h3><p>Since the major difference between CAST() and CONVERT() is that the latter can convert the character set of a column into a different one, let's show that in action. </p><p>The first thing to be aware of is that the syntax is a little different for converting character sets. 
In that case, we need to add the USING keyword between the expression and character set:</p><pre>CONVERT(expr USING charset);</pre><p>In <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> (or <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>), we can see a table's character set and collation on the Info Pane:</p><img alt="film_table_info (34K)" src="https://www.navicat.com/link/Blog/Image/2021/20210709/film_table_info.jpg" height="808" width="330" /><p>With that in mind, we could apply the CONVERT() function to the previous query to convert the description field from UTF-8 to Latin1. In case you're curious, the difference between the two is that, in latin1, each character is exactly one byte long, while, in utf8, a character can consist of more than one byte. Consequently, utf8 has more characters than latin1. Moreover, the characters they do have in common aren't necessarily represented by the same byte sequence. </p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we saw how to use CAST() to convert data into a different type and how to convert between character sets using CONVERT(). To reiterate, CAST() should be your <i>go-to</i> conversion function. CONVERT() is better suited for switching between character sets.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>? You can try it for 14 days completely free of charge for evaluation purposes.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. 
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Get the Maximum Value across Columns</title>
<link>https://www.navicat.com/company/aboutus/blog/1752-get-the-maximum-value-across-columns.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Get the Maximum Value across Columns</title></head><body><b>Jun 30, 2021</b> by Robert Gravelle<br/><br/><p>The MAX() function is often used to return the largest value of a given column. It's not picky about types, so the column may contain salaries, best-before dates, or last names. The question is, can the MAX() function also find the highest value across multiple columns? The short answer is yes; the longer answer is that it depends on the database you're using. In today's blog, we'll explore a few ways to obtain the maximum value among two or more columns, using either the MAX() function or an even better alternative.</p><h1 class="blog-sub-title">The MySQL Solution</h1><p>If you're working with MySQL, you can combine MAX() with the GREATEST() function to get the biggest value from two or more fields. Here's the syntax for GREATEST():</p><pre>GREATEST(value1,value2,...)</pre><p>Given two or more arguments, it returns the largest (maximum-valued) argument. If any argument is NULL, GREATEST() returns NULL.</p><h3>An Example</h3><p>If you're going to look for the maximum value across fields, it helps to compare columns that contain similar data - apples against apples, so to speak. The <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx" target="_blank">classicmodels database</a>'s products table contains two similar columns: "buyPrice" and "MSRP". Both store dollar figures as decimal data:</p><img alt="products_table (114K)" src="https://www.navicat.com/link/Blog/Image/2021/20210630/products_table.jpg" height="428" width="462" /><p>Ideally, the GREATEST() input parameters should be scalar values. As it happens, the MAX() function returns the largest value in a column!
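Combining the two is straightforward: feed the two column maximums to GREATEST() as scalar arguments. As a minimal, runnable sketch (using Python's built-in sqlite3 module with a few invented products rows - SQLite has no GREATEST(), so its multi-argument max() plays that role here):

```python
import sqlite3

# In-memory stand-in for the classicmodels products table
# (the three rows below are invented for the demo).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (productName TEXT, buyPrice REAL, MSRP REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [
        ("Model A", 48.81, 95.70),
        ("Model B", 98.58, 214.30),
        ("Model C", 68.99, 118.94),
    ],
)

# MySQL form: SELECT GREATEST(MAX(buyPrice), MAX(MSRP)) FROM products;
# SQLite's multi-argument max() stands in for GREATEST(), fed the two
# column maximums as scalar subqueries.
row = conn.execute(
    """SELECT max((SELECT MAX(buyPrice) FROM products),
                  (SELECT MAX(MSRP) FROM products))"""
).fetchone()
print(row[0])  # → 214.3
```

In MySQL itself, the single statement SELECT GREATEST(MAX(buyPrice), MAX(MSRP)) FROM products; returns the same result.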
Here's the query and result in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>:</p><img alt="greatest_function (37K)" src="https://www.navicat.com/link/Blog/Image/2021/20210630/greatest_function.jpg" height="268" width="450" /><p>Not surprisingly, the MSRP contained the highest value. Otherwise, the company might want to consider a different vendor.</p><h1 class="blog-sub-title">Some Other Solutions</h1><p>For other databases that don't support the GREATEST() function, there are ways to compare multiple columns using MAX(). It just takes a bit of creativity! Here are a few solutions, using SQL Server:</p><h3>UNION ALL</h3><p>The UNION ALL command combines the result sets of two or more SELECT statements. Unlike the UNION command, UNION ALL includes duplicates. In any event, either command may be utilized to combine different columns into one long result set. Its results may then be treated as a subquery from which the maximum value is derived: </p><pre>SELECT MAX(T.field) AS MaxOfColumns
FROM (
    SELECT column1 AS field
    FROM YourTable
    UNION ALL
    SELECT column2 AS field
    FROM YourTable
    UNION ALL
    SELECT column3 AS field
    FROM YourTable
) AS T</pre><p>Here's an example query against the Sakila Sample Database in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a> that includes both the rental and return dates from the rental table:</p><img alt="union_all (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20210630/union_all.jpg" height="302" width="434" /><h3>Select MAX from VALUES</h3><p>The SQL VALUES keyword is not just for INSERTs.
You can also SELECT from a list of values using the following syntax:</p><pre>SELECT c FROM (VALUES (1), (2), (3)) AS temp(c)</pre><p>This statement can be expanded to serve our purpose as follows:</p><pre>SELECT (
    SELECT MAX(myval)
    FROM (VALUES (column1), (column2), (column3)) AS temp(myval)
) AS MaxOfColumns
FROM YourTable</pre><p>We can use this template as the basis for our query against the rental table: </p><img alt="values (47K)" src="https://www.navicat.com/link/Blog/Image/2021/20210630/values.jpg" height="291" width="543" /><h1 class="blog-sub-title">Conclusion</h1><p>As we saw here today, there are several ways to obtain the maximum value across multiple columns. These include using the GREATEST() function and getting a bit creative with the MAX() function.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>? You can try it for 14 days completely free of charge for evaluation purposes.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Introduction to Inverse Indexes</title>
<link>https://www.navicat.com/company/aboutus/blog/1751-introduction-to-inverse-indexes.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Introduction to Inverse Indexes</title></head><body><b>Jun 25, 2021</b> by Robert Gravelle<br/><br/><p>Like most database developers, you've probably written your fair share of queries that search for that proverbial needle in a haystack of text or binary data. I know I have! Perhaps even more important than the SELECT statements that you write against the database are the indexes that it contains. To that end, an inverted index can go a long way towards making mounds of data accessible in an expeditious manner. In today's blog, we'll learn what inverted indexes are and how to use them in your databases, using MySQL as an example. </p><h1 class="blog-sub-title">Forward Index versus Inverted Index</h1><p>Inverted indexes were actually invented decades ago, around the same time as the first AI and machine learning algorithms. However, it wasn't until recent increases in computing power that it became possible to make use of inverted indexes in traditional relational databases. Inverted indexes allow information in relational databases to be found much faster, and also allow queries to be far more complex and specific.</p><p>Unlike a regular (forward) index, which maps table rows to a list of keywords, an inverted index maps the keywords to their respective rows.
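The inversion step itself can be sketched in a few lines of Python (a toy illustration; the row names and keywords are the same invented data as the comparison table):

```python
from collections import defaultdict

# Forward index: each row maps to its list of keywords.
forward_index = {
    "row1": ["hello", "sky", "morning"],
    "row2": ["tea", "coffee", "hi"],
    "row3": ["greetings", "sky"],
}

# Inversion: map each keyword to the rows it appears in.
inverted_index = defaultdict(list)
for row, keywords in forward_index.items():
    for word in keywords:
        inverted_index[word].append(row)

# A forward-index search must scan every row's keyword list...
sky_forward = [row for row, words in forward_index.items() if "sky" in words]
# ...while the inverted index answers with a single lookup.
sky_inverted = inverted_index["sky"]

print(sky_forward)   # → ['row1', 'row3']
print(sky_inverted)  # → ['row1', 'row3']
```

Both searches return the same rows, but the inverted lookup does no scanning, which is exactly why it stays fast as the data grows.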
Here's a side-by-side comparison:</p><table border="2">  <tr><th colspan="2" width="50%">Forward Index</th><th colspan="2" width="50%">Inverted Index</th></tr>  <tr><th width="100">Row</th><th>Keywords</th><th>Word</th><th>Rows</th></tr>  <tr><td valign="top">row1<br/>row2<br/>row3</td><td valign="top">hello, sky, morning<br/>tea, coffee, hi<br/>greetings, sky</td><td valign="top">hello<br/>sky<br/>coffee<br/>hi<br/>greetings</td><td valign="top">row1<br/>row1, row3<br/>row2<br/>row2<br/>row3</td></tr></table><p>Searching using a forward index is a slower process because the database engine has to look at the entire contents of the index to retrieve all pages related to a word. Meanwhile, searching via an inverted index is very fast because there are no duplicate keywords in the index and each word points directly to the relevant row(s).</p><h1 class="blog-sub-title">Inverted Indexes in MySQL</h1><p>MySQL's InnoDB engine implements full-text indexes on text-based columns (CHAR, VARCHAR, or TEXT columns) to speed up queries and DML operations on data contained within those columns. Full-text indexes employ an inverted index design, so that each keyword in the index points to a list of documents that the word appears in. By storing position information for each word, full-text indexes also support proximity searches, whereby two or more words that occur within a certain number of words of each other may be located.
</p><p>In Navicat database administration and development tools, such as <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, you can view a table's engine in the General Information panel: </p><img alt="table_properties (11K)" src="https://www.navicat.com/link/Blog/Image/2021/20210625/table_properties.png" height="811" width="264" /><p>Assuming that your table uses the InnoDB engine, you can assign a FULLTEXT index via the Index Type drop-down on the Indexes tab of the Table Designer. Here's an example of the perfect column on which to add a FULLTEXT index - the Description column on the Sakila Sample Database's Film table:</p><img alt="index_type (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20210625/index_type.jpg" height="246" width="580" /><p>Text fields such as this are good candidates for an inverted index because there are so many words and phrases to search on: </p><img alt="film_description_column (118K)" src="https://www.navicat.com/link/Blog/Image/2021/20210625/film_description_column.jpg" height="535" width="576" /><h1 class="blog-sub-title">Conclusion</h1><p>Inverted indexes are a great way to speed up your queries while allowing them to be far more complex and specific. Just be aware that the indexing process takes longer than it does for forward indexes.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> or <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try both for 14 days completely free of charge for evaluation purposes!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. 
In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Object Locking in Relational Database Transactions - Part 3</title>
<link>https://www.navicat.com/company/aboutus/blog/1750-object-locking-in-relational-database-transactions-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Object Locking in Relational Database Transactions - Part 3</title></head><body><b>Jun 22, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Avoiding and/or Minimizing Deadlocks</h1><p>In relational database systems (RDBMS), a deadlock is a situation where two concurrent transactions cannot make progress because each one is waiting for the other to release a lock. In Part 1 of this series, we established what Object Locking is in Relational Databases, the different types of locks, and deadlocking. Then, in Part 2, we compared the pros and cons of Pessimistic and Optimistic locking. In this installment, we'll be exploring a few causes of deadlocks, as well as strategies for avoiding, or at least minimizing, them.</p><h1 class="blog-sub-title">Inefficient Queries</h1><p>Deadlocks are unavoidable to some degree, but their infrequent occurrence does not automatically spell disaster, as long as one of the two transactions ends in a timely fashion. As it turns out, one of the most common sources of blocking issues is long, inefficient SQL statements that cause the database to "hang" while they run their course. These may be remedied in two steps:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Optimize poorly performing SQL statements so locks are released in the shortest time possible.</li><li>Identify whether the locks can be released before any long-running SQL statements are executed within the same session.</li></ul><p>For example, if locks are acquired by a DELETE statement that is immediately followed by a SELECT statement performing a complete table scan, you should ascertain whether it is possible to execute a COMMIT statement between them.
This should help the locks release earlier.</p><h1 class="blog-sub-title">Nested Transactions</h1><p>Another frequent cause of blocking issues is sleeping sessions that have lost track of the nesting level of the transaction. For example, if an application cancels an SQL statement or is timed out but doesn't issue a COMMIT or ROLLBACK statement, then resources could remain locked indefinitely. Some ways to deal with this issue include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Following any application error, submit an IF @@TRANCOUNT > 0 ROLLBACK TRAN statement within the application's error handler.</li><li>Include the SET XACT_ABORT ON statement in any stored procedures that start transactions - especially if they aren't cleaning up after an error. By doing so, should a run-time error occur, any open transactions will be aborted and control returned to the client.</li><li>If connection pooling is employed by an application that opens the connection and runs a few queries before returning the connection to the pool, then you may want to consider temporarily disabling connection pooling. By doing so, the DB Server connection is physically logged out, resulting in the server rolling back any open transactions.</li></ul><h1 class="blog-sub-title">Fetching Partial Results</h1><p>A lesser-known source of deadlocks is applications that don't fetch all result rows in one go. This is a problem because, once a query has been sent to the server, applications must fetch all result rows to completion. If this doesn't happen, locks can be kept on tables, which results in blocking for other users. Therefore, try to code your applications so that they fetch all of the rows they need rather than spreading the fetching out over several iterations.</p><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog listed a few causes of deadlocks, as well as strategies for avoiding them and dealing with deadlocks when they do occur.
Next week, we'll be moving on to an entirely new subject.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Object Locking in Relational Database Transactions - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1748-object-locking-in-relational-database-transactions-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Object Locking in Relational Database Transactions - Part 2</title></head><body><b>Jun 16, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Pessimistic versus Optimistic Locking</h1><p>Relational database systems (RDBMS) employ various locking strategies to enforce transaction ACID properties when modifying (e.g., UPDATING or DELETING) table records. On occasion, a deadlock may occur when two concurrent transactions cannot make progress because each one is waiting for the other to release a lock. In Part 1 of this series, we established what Object Locking is in Relational Databases, the different types of locks, and deadlocking. In today's follow-up, we'll be comparing the pros and cons of Pessimistic and Optimistic locking. </p><h1 class="blog-sub-title">Pessimistic Locking</h1><p>With Pessimistic Locking, a resource is locked from the time it is first accessed in a transaction until the transaction is finished, making it inaccessible to other transactions during that time. In situations where most transactions simply read the resource and never update it, an exclusive lock may be overkill, as it leads to more lock contention (and deadlocks). Thinking back to the banking example from Part 1, the account would be locked as soon as it was accessed in a transaction. Any attempt to use the account in other transactions while it was locked would either delay the other process until the account lock was released, or cause that process's transaction to be cancelled and rolled back to the previous state. </p><h1 class="blog-sub-title">Optimistic Locking</h1><p>Using optimistic locking, a resource is not actually locked when it is first accessed by a transaction. Instead, the pristine state of the resource is persisted. Other transactions are still able to access the resource, making the possibility of conflicting changes a known risk.
At commit time, when the resource is about to be updated in persistent storage, the state of the resource is read from storage again and compared to the state that was saved when the resource was first accessed in the transaction. If the two states differ, that means that a conflicting update was made, so the transaction is rolled back. In our banking example, the amount of the account would be saved when it's first accessed. If the transaction changed the account amount, the amount would be read from storage again just before it was about to be updated. If the amount had changed since the transaction began, the transaction would fail; otherwise, the new amount would stand and be saved.</p><h1 class="blog-sub-title">Deciding Between Pessimistic and Optimistic Locking</h1><p>Now that we've covered what both types of locking are, the question becomes which to use. In most cases, Optimistic Locking is more efficient and offers higher performance. Meanwhile, Pessimistic Locking provides better data integrity, but managing the locks is harder and there is a greater chance of encountering deadlocks. When choosing between pessimistic and optimistic locking, consider the following three guidelines:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Pessimistic locking is useful if there are a lot of updates and relatively high chances of users trying to update data concurrently.</li><li>Pessimistic locking is also more appropriate in applications that contain small tables that are frequently updated. 
In the case of such "hotspots", conflicts are so likely that optimistic locking wastes effort in rolling back conflicting transactions.</li><li>Optimistic locking is useful when the possibility for conflicts is very low, i.e., there are many records but relatively few users, or very few updates and mostly read operations.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In this blog, we compared Pessimistic versus Optimistic locking. In the next installment, we'll be exploring strategies for recovering from a deadlock situation.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Object Locking in Relational Database Transactions</title>
<link>https://www.navicat.com/company/aboutus/blog/1735-object-locking-in-relational-database-transactions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Object Locking in Relational Database Transactions - Part 1</title></head><body><b>Jun 8, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: Overview, Lock Granularity, and Deadlocks</h1><p>Recently, we've had a few blogs about database transactions and how they enforce the four ACID (Atomicity, Consistency, Isolation, Durability) properties. In today's blog, we'll be taking a look at another mechanism employed by relational databases (RDBMS) to enforce ACID properties, namely, Object Locking. Specifically, we'll learn what it is, what role(s) it plays in RDBMS transactions, and some of the side effects locking may cause. While Database Object Locking can be a fairly technical and complicated subject, we're going to break it down into layman's terms here and keep things as simple as possible.</p><h1 class="blog-sub-title">What Is Object Locking?</h1><p>Simply put, Object Locking is a way to prevent simultaneous access to data in a database, in order to avoid data inconsistencies. To illustrate how Object Locking works, imagine two bank tellers attempting to update the same bank account for two different transactions. Both tellers retrieve (i.e., copy) the account's record. Teller A applies and saves a transaction. Teller B applies a different transaction to his/her saved copy, and saves the result, overwriting the transaction entered by teller A. Now the record no longer reflects the first transaction, as if it had never even happened!</p><p>The fix is to lock the record whenever it is being modified by any user, so that no other user can alter it at the same time. This prevents records from being overwritten incorrectly, but allows only one record to be processed at a time, locking out other users who need to edit that record at the same moment. 
Hence, anyone attempting to retrieve the same record for editing is denied write access because of the lock (depending on the exact implementation, they may still be able to view the record in a read-only state). Once the record is saved (or the edits are canceled), the lock is released. By preventing records from being saved so as to overwrite other changes, isolation (the I in ACID) is preserved.</p><h1 class="blog-sub-title">Lock Granularity</h1><p>The above example demonstrated an instance of record-level locking. Now imagine if the two bank tellers above were serving two different customers, but both their accounts were contained in one ledger. In that situation, the entire ledger - or, one or more database tables - would need to be locked for editing. As you can imagine, locking entire tables can lead to a lot of unnecessary waiting. If the tellers could each remove one page from the ledger, containing the account of the current customer (plus a few other accounts, perhaps), then multiple customers could be served concurrently, provided that each customer's account is found on a different page than the others. If both customers have accounts on the same page, then only one may be served at a time. This is analogous to page-level locking in a database.</p><p>There are four types of locks. Here they are, in order of increasing granularity:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>database locks</li><li>table locks</li><li>page locks</li><li>row locks</li></ul><h1 class="blog-sub-title">Lock Granularity and Deadlocks</h1><p>The utilization of granular locks creates the possibility of a situation called "deadlock". A deadlock may occur when incremental locking (locking one entity, then locking one or more additional entities) is utilized. To illustrate, my wife and I often transfer money between our personal accounts. 
If we were to each ask a teller to obtain our individual account information so we could transfer some money into the other spouse's account, the two accounts would essentially be locked. Then, when our tellers attempted to transfer money into each other's accounts, they would each find the other account to be "in use", forcing them to wait for the accounts to be freed up. Unknowingly, the two tellers are waiting for each other, and neither of them would be able to complete their transaction until the other gives up and returns the account! Thankfully, various techniques have been devised to circumvent such problems. These will be addressed in the next installment.</p><h1 class="blog-sub-title">Going Forward</h1><p>In today's blog, we established what Object Locking is in Relational Databases, the different types of locks, and deadlocking. In the next installment, we'll review some collision resolution strategies, as well as pessimistic versus optimistic locking.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>DBeaver vs Navicat - Part 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1949-dbeaver-vs-navicat-part-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>DBeaver vs Navicat: Visual Appeal, Secure Connectivity, and NoSQL Support</title></head><body><b>Jun 3, 2021</b> by Robert Gravelle<br/><br/><img alt="header_2 (29K)" src="https://www.navicat.com/link/Blog/Image/2021/20210603/header_2.jpg" height="217" width="428" /><p>Both DBeaver and Navicat are Universal Database Tools, which means that they support all popular databases, including MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite. Moreover, both are compatible with cloud databases as well, such as Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas. But, as the saying goes, "the devil is in the details", so, while the two products may seem similar at first glance, a closer examination of each tool's Visual Appeal, Secure Connectivity, and NoSQL Support will reveal that the differences between the two far outnumber any apparent similarities.</p><h1 class="blog-sub-title">Visual Appeal</h1><p>Perhaps visual appearance is not the first thing one thinks of when considering application features, but how an application's GUI looks can tell us a lot about what kind of user experience (UX) it provides. Here are side-by-side screen captures of the DBeaver and Navicat main screens on Windows:</p><table summary="GUI Comparison" width="100%">  <tr>    <td width="48%"><img width="100%" alt="DBeaver GUI" src="https://www.navicat.com/link/Blog/Image/2021/20210603/DBeaver_gui.jpg" /></td>    <td>&nbsp;</td>    <td width="48%"><img width="100%" alt="Navicat Premium GUI" src="https://www.navicat.com/link/Blog/Image/2021/20210603/02.Product_01_Premium_Windows_01_Mainscreen15.png" /></td>  </tr></table><p>There can be little doubt that both products have well-designed GUIs. Having said that, Navicat's interface is cleaner and more intuitive, IMHO. 
Here are a few reasons why:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>All of the main actions are accessible via menu items at the top of the screen.</li><li>There is a large button toolbar for accessing other application screens and utilities.</li><li>Different Object types are identified by distinct icons, as seen in the left pane.</li></ul><h1 class="blog-sub-title">Secure Connectivity</h1><p>For business professionals, it is imperative to be able to connect securely to database instances. </p><p>DBeaver supports configuration of standard (host, port, user credentials) as well as advanced connection properties. These include SSH tunnel, SOCKS proxy, and Shell commands to be executed before/after the actual database connection.</p><p>Navicat establishes secure connections through SSH Tunneling and SSL to ensure that every connection is secure, stable, and reliable. Supported authentication methods include PAM authentication for MySQL and MariaDB, Kerberos and X.509 authentication for MongoDB, and GSSAPI authentication for PostgreSQL. Navicat provides more authentication mechanisms than DBeaver, and most of its competitors for that matter!</p><h1 class="blog-sub-title">NoSQL/BigData Database Support</h1><p>Due to their many significant differences from traditional relational databases, NoSQL databases such as MongoDB present their own unique requirements. </p><p>DBeaver has special extensions for MongoDB, as well as other document databases. These give NoSQL databases an SQL interface so that you can work with them in the same way as you would relational databases.</p><p>Navicat is fully compatible with MongoDB out of the box. Navicat also takes a different approach to working with NoSQL databases. 
Rather than try to use MongoDB as an SQL database, it uses MongoDB's own syntax for managing data, so that developers may utilize its full capabilities:</p><img alt="aggregate_query (84K)" src="https://www.navicat.com/link/Blog/Image/2021/20210603/aggregate_query.jpg" height="450" width="533" /><p>Moreover, Navicat can present NoSQL data in one of three ways, for working with documents in various capacities. They are:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"> <li>Grid view</li> <li>Tree view</li> <li>JSON view</li></ul><p>Grid View (pictured above) is the traditional tabular display that DBAs are most familiar with. It can handle any document size, and supports advanced features like highlighting cells based on data types, column hiding, and more.</p><p>Tree View shows your documents in a hierarchical view. All embedded documents and arrays are represented as nodes, which can be expanded or collapsed as needed:</p><img alt="tree_view.jpg" src="https://www.navicat.com/link/Blog/Image/2021/20210603/tree_view.jpg" /><p>You can also view your data as JSON documents and add documents using the built-in validation mechanism, which ensures your edits are correct.</p><img alt="json_view.jpg" src="https://www.navicat.com/link/Blog/Image/2021/20210603/json_view.jpg" /><h1 class="blog-sub-title">Conclusion</h1><p>In part 2 of this series on DBeaver vs. Navicat Premium, we compared the Visual Appeal, Secure Connectivity, and NoSQL Support of both products. As we saw, while both look similar to some degree, if one delves beneath the surface, there are some enormous differences between the two.</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. 
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>DBeaver vs Navicat: A Database Tools Showdown</title>
<link>https://www.navicat.com/company/aboutus/blog/1728-dbeaver-vs-navicat-a-database-tools-showdown.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>DBeaver vs Navicat: A Database Tools Showdown</title></head><body><b>June 2, 2021</b> by Robert Gravelle<br/><br/><img alt="header (18K)" src="https://www.navicat.com/link/Blog/Image/2021/20210602/header.jpg" height="167" width="430" /><p>In my early days as an IT consultant, I relied on a variety of open source tools to accomplish my tasks. My rationale was that I was saving money by avoiding the costs associated with commercial products. It was only a few years later that I came to realize that commercial products can actually save time and money by streamlining and automating many of the common tasks that we tend to perform on a regular basis. </p>   <p>Database clients are a category of software that many developers shy away from spending money on. The assumption here is that you don't need a lot of features to view database tables and perform queries against them. That may be true, to a point, but if you find yourself doing a lot of database work, it may be high time to upgrade your DB client. </p><p>I was recently introduced to a free universal database tool called DBeaver. Not knowing much about it, I thought that it might be informative to compare it to <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. Let the showdown begin and may the best product win!</p><h1 class="blog-sub-title">About the Competitors</h1><p>In the left corner, we have the challenger: DBeaver. It's a free and open source universal database tool for developers, database administrators, or anyone who needs to work with data in a professional capacity. Written in Java and based on the Eclipse platform, DBeaver uses the JDBC application programming interface (API) to interact with databases via a JDBC driver. For other databases, such as NoSQL stores, it relies on its own proprietary database drivers.</p><p>Like many open source tools, DBeaver was started in 2010 as a hobby project. 
It was meant to be free and open-source with an appealing UI. From its early days, the focus was to include the most frequently utilized features of database developers. The first official release was in 2011 on Freecode. It quickly became a popular tool in the open-source community.</p><p>In the right corner, we have the reigning champion: <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. It is a commercial database development and design tool that allows you to simultaneously connect to multiple local and/or cloud databases from a single application. It was designed to meet the needs of a variety of audiences, from database administrators and programmers to various businesses/companies that serve clients and share information with partners. </p><p>The main goal of the initial version of Navicat was to simplify the management of MySQL instances. In 2008, <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> was the winner of the Hong Kong ICT 2008 Award of the Year, Best Business Grand Award and Best Business (Product) Gold Award. Navicat Premium was launched in 2009.  It combined all previous Navicat versions into a single product and could connect to all popular database types simultaneously, giving users the ability to perform data migration between different (heterogeneous) database types.</p><h1 class="blog-sub-title">Conclusion</h1><p>Now that we've introduced our participants, the next installment(s) will delve into each tool's feature set, and compare them for usability, performance, user ratings, and more! </p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. 
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Using the SQL COUNT() Function with GROUP BY</title>
<link>https://www.navicat.com/company/aboutus/blog/1727-using-the-sql-count-function-with-group-by.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using the SQL COUNT() Function with GROUP BY</title></head><body><b>May 28, 2021</b> by Robert Gravelle<br/><br/><p>Back in August of 2020, <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1650-the-many-flavors-of-the-sql-count-function" target="_blank">The Many Flavors of the SQL Count() Function</a> provided an overview of COUNT's many input parameter variations.  Another way to use the COUNT() function is to combine it with the GROUP BY clause. Using the COUNT() function in conjunction with GROUP BY is useful for breaking down counts according to various groupings. In today's blog, we'll learn how to group counts by different criteria by querying the Sakila Sample Database using Navicat Premium as our database client.</p>  <h1 class="blog-sub-title">Case 1: Actors Who Have Appeared In Most PG Movies</h1><p>By itself, the COUNT() function could tell us how many actors have appeared in PG Movies. However, if we wanted to know how many PG movies each actor has appeared in, we would need to add the actor_id to the GROUP BY clause. Recall that the GROUP BY clause groups records into summary rows and returns one record for each group. 
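</p><p>Such a query might look like the following (a sketch against Sakila; the authoritative version appears in the screenshot further down, so the exact join path and aliases here are assumptions):</p><pre>SELECT a.first_name, a.last_name, COUNT(*) AS pg_film_count
FROM actor a
JOIN film_actor fa ON fa.actor_id = a.actor_id
JOIN film f ON f.film_id = fa.film_id
WHERE f.rating = 'PG'
GROUP BY a.actor_id, a.first_name, a.last_name
ORDER BY pg_film_count DESC;</pre><p>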
GROUP BY queries often include aggregate functions such as COUNT, MAX, SUM, AVG, etc.</p><p>Here is the SELECT statement, along with the query results, as shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="actors_who_have_appeared_in_most_pg_movies (105K)" src="https://www.navicat.com/link/Blog/Image/2021/20210528/actors_who_have_appeared_in_most_pg_movies.jpg" height="687" width="539" /><p>Notice that, when using GROUP BY, we can also order records by counts in descending order so that actors with the highest number of PG films in their filmography appear at the top of the results.</p><h1 class="blog-sub-title">Case 2: Number of Films Rented Per Day</h1><p>Applying the COUNT() function to the rental table can tell us how many movies have been rented in total. For more detailed counts, we need to turn to the GROUP BY clause. For example, we can break down counts by individual days by grouping by the rental_date. We also have to specify that the return_date must not be NULL, so that we don't count movies that have been rented but not yet returned.</p><img alt="num_of_films_rented (66K)" src="https://www.navicat.com/link/Blog/Image/2021/20210528/num_of_films_rented.jpg" height="564" width="383" /><p>In this case, we omitted the ORDER BY clause because older versions of MySQL implicitly sort results by the grouped column - here, the rental date. Note that MySQL 8.0 no longer guarantees this behavior, so an explicit ORDER BY is the safer choice.</p><h1 class="blog-sub-title">Case 3: Number of Films Rented by Customer Per Month</h1><p>After a couple of relatively simple examples, it's time to ratchet up the level of difficulty a bit.  A GROUP BY clause can group on multiple fields to obtain even more fine-grained tabulations. 
Case in point, this query counts movie rentals for each customer, month, and year:</p><img alt="num_of_films_rented_by_customer_per_month (175K)" src="https://www.navicat.com/link/Blog/Image/2021/20210528/num_of_films_rented_by_customer_per_month.jpg" height="712" width="665" /><p>In this case, we included the ORDER BY to sort results by customers' last names (last_name) rather than by customer_id. Something similar is done with the months: month names are displayed, but grouping and ordering are performed on the month number.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to group counts by different criteria by querying the Sakila Sample Database using Navicat Premium. As we saw here today, using the COUNT() function in conjunction with GROUP BY is useful for breaking down counts according to various groupings. In fact, you'd be hard-pressed to obtain the same data without combining COUNT() with GROUP BY!</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Important SQL Server Functions - Miscellaneous Functions</title>
<link>https://www.navicat.com/company/aboutus/blog/1718-important-sql-server-functions-miscellaneous-functions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Important SQL Server Functions - Miscellaneous Functions</title></head><body><b>May 24, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 4: Miscellaneous Functions</h1><p>This last category of important SQL Server functions includes those that deal with nulls, conversion, and control flow. Far from leftovers, these functions are among some of the most useful you'll ever come across!</p><h1 class="blog-sub-title">COALESCE</h1><p>Anytime you select a column whose value is not mandatory, you're bound to encounter null values. That only makes sense, because null values represent absent or missing information.  Trouble is, nulls can wreak havoc when included in calculations as well as other operations that one might perform on column data.</p><p>The COALESCE function accepts a list of arguments and returns the first one that does not contain a null value. Hence, SQL Server proceeds through each input parameter you provide until it either encounters one that isn't null or simply runs out of arguments. Here's its syntax:</p><pre>COALESCE(val1, val2, ...., val_n)</pre><p>It is commonplace to substitute a value of zero in the place of null. In some instances, a different value may make more sense. For example, the film table of the Sakila Sample Database contains a column named original_language_id for films that are not originally in English. We can employ COALESCE to set its value to <i>1</i> (the language_id for English) whenever a null is found:</p><img alt="coalesce (53K)" src="https://www.navicat.com/link/Blog/Image/2021/20210524/coalesce.jpg" height="516" width="702" /><h1 class="blog-sub-title">CONVERT</h1><p>Converting an output value into a specified data type is par for the course in database work. In SQL Server, you can change the data type of a value to another using the CONVERT function. 
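</p><p>Written out as a query, the COALESCE substitution described above might look like this (a sketch against Sakila's film table; the column alias is an assumption):</p><pre>SELECT title,
       COALESCE(original_language_id, 1) AS original_language_id
FROM film;</pre><p>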
Its syntax is simple:</p><pre>CONVERT(type, value)</pre><p>One good reason to use CONVERT is for removing the time portion from a datetime field.  Here's a query that shows the same field in its original datetime format and without the time portion:</p><img alt="convert (73K)" src="https://www.navicat.com/link/Blog/Image/2021/20210524/convert.jpg" height="454" width="494" /><h1 class="blog-sub-title">IIF</h1><p>If/else statements are the most commonly used control flow structures in programming. SQL Server provides the power of the if/else statement to our queries in the form of the IIF function. Its syntax is: </p><pre>IIF(expression, value_if_true, value_if_false)</pre><p>We can utilize the IIF function to separate film lengths into three groups - Short, Medium, and Long - depending on their lengths. We'll categorize film lengths as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Short: under 80 minutes</li><li>Medium: between 80 and 120 minutes</li><li>Long: over 120 minutes</li></ul><p>We use the IIF() function to compare the length of each film to a given expression; depending on the result, it returns a 1 or a NULL. If it returns a 1, it will be counted under that column heading (Short, Medium, or Long):</p><img alt="iif (38K)" src="https://www.navicat.com/link/Blog/Image/2021/20210524/iif.jpg" height="238" width="498" /><h1 class="blog-sub-title">Conclusion</h1><p>That brings us to the end of our series on the most important SQL Server Functions. As mentioned at the beginning of the series, it's helpful to know the exact function names and signatures for your specific database type because they can vary from provider to provider.  Case in point, the IIF function is largely specific to Microsoft products.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>? 
You can try it for 14 days completely free of charge for evaluation purposes!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Important SQL Server Functions - Date Functions</title>
<link>https://www.navicat.com/company/aboutus/blog/1717-important-sql-server-functions-date-functions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Important SQL Server Functions - Date Functions</title></head><body><b>May 14, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 3: Date Functions</h1><p>After twenty years in IT, I can confirm that dates and times can be notoriously difficult to work with.  Thankfully, modern relational databases like SQL Server provide a wealth of highly useful functions for this purpose. In today's blog, we'll explore some of the most popular ones.</p><h1 class="blog-sub-title">Getting the Current Date and Time</h1><p>Every programming language requires a way to get the current date and/or time. In SQL Server, there are a couple of ways to get the current date and time, via the CURRENT_TIMESTAMP and GETDATE() functions.  Both return the current date and time, in a 'YYYY-MM-DD hh:mm:ss.mmm' format:</p><img alt="getdate_and_current_timestamp (32K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/getdate_and_current_timestamp.jpg" height="263" width="404" /><p>So, why the two functions? As you can see in the above screenshot, GETDATE() requires parentheses, while CURRENT_TIMESTAMP does not.  That makes CURRENT_TIMESTAMP ideal for setting the default value of auditing fields such as create and last modified columns: </p><img alt="current_timestamp (61K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/current_timestamp.jpg" height="287" width="773" /><h1 class="blog-sub-title">DATEPART</h1><p>Being able to get the current date and time is one thing, but sometimes you need to parse out individual date parts.  That's where the DATEPART() function comes in.  It returns a specified part of a date as an integer value. Here's its syntax:</p><pre>DATEPART(interval, date)</pre><p>The interval parameter has to be a specific date part or abbreviation. For example, the year can be expressed as either <i>year</i>, <i>yyyy</i>, or <i>yy</i>.  
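</p><p>For instance, the abbreviations are interchangeable (a quick sketch; both columns return the current year as an integer):</p><pre>SELECT DATEPART(year, GETDATE()) AS the_year,
       DATEPART(yy, GETDATE()) AS same_year;</pre><p>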
Here's the full list:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">  <li>year, yyyy, yy = Year</li>  <li>quarter, qq, q = Quarter</li>  <li>month, mm, m = Month</li>  <li>dayofyear, dy, y = Day of the year</li>  <li>day, dd, d = Day of the month</li>  <li>week, ww, wk = Week</li>  <li>weekday, dw, w = Weekday</li>  <li>hour, hh = Hour</li>  <li>minute, mi, n = Minute</li>  <li>second, ss, s = Second</li>  <li>millisecond, ms = Millisecond</li></ul>    <p>The following query breaks the current date into its day, month, and year constituents:</p><img alt="datepart (48K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/datepart.jpg" height="236" width="541" /><h1 class="blog-sub-title">DATEFROMPARTS</h1><p>Date/time functions can also help us construct a date from disparate pieces of data.  The DATEFROMPARTS function accepts a year, month, and day as input parameters and combines them to form a complete date:</p><pre>DATEFROMPARTS(year, month, day)</pre><p>Here's an example:</p><img alt="date_from_parts (30K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/date_from_parts.jpg" height="234" width="537" /><h1 class="blog-sub-title">DATEADD</h1><p>Adding and subtracting date/time intervals to and from a date is among the most common operations on dates. In SQL Server, the function to do that is DATEADD. It accepts three input parameters: the interval to add, how many, and the date to apply the intervals to:</p><pre>DATEADD(interval, number, date)</pre><p>The intervals accepted by DATEADD are identical to those of DATEPART, which we saw earlier, so I won't repeat them here. 
Instead, let's take a look at a couple of examples of this important function.</p><p>Our first example adds three months to today's date:</p><img alt="date_add (41K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/date_add.jpg" height="236" width="546" /><p>To subtract an interval, just provide a negative number parameter:</p><img alt="date_subtract (33K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/date_subtract.jpg" height="238" width="543" /><h1 class="blog-sub-title">DATEDIFF</h1><p>Our last function, DATEDIFF, returns the difference between two date values, as expressed by the provided interval (see above for the full list of accepted values):</p><pre>DATEDIFF(interval, date1, date2)</pre><p>The following query returns the difference between two dates in months:</p><img alt="date_diff (31K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/date_diff.jpg" height="237" width="546" /><p>The first date would normally be considered to be the earlier one, so if the second date parameter precedes the first, then the DATEDIFF result is expressed as a negative value: </p><img alt="date_diff_hours (34K)" src="https://www.navicat.com/link/Blog/Image/2021/20210514/date_diff_hours.jpg" height="238" width="631" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we covered some of the most important SQL Server date and time functions. In the next and final installment, we'll be looking at miscellaneous functions.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. 
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Important SQL Server Functions - Numeric Functions</title>
<link>https://www.navicat.com/company/aboutus/blog/1716-important-sql-server-functions-numeric-functions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Important SQL Server Functions - Numeric Functions</title></head><body><b>May 11, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 2: Numeric Functions</h1><p>Like most modern relational database offerings, SQL Server comes loaded with an impressive collection of built-in functions. While some functions are amazingly similar across the board, exact names and signatures may vary.  Therefore, it's a good idea to brush up on the SQL Server specific implementations of common SQL functions. In <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1715-important-sql-server-functions-string-utilities.html" target="_blank">part 1</a> of this series, we explored string functions. In today's installment, we'll be moving on to numerical functions, a category that is highly useful in the generation of statistics and calculated values!</p><h1 class="blog-sub-title">Abs</h1><p>These are not the Abs that people train to get ready for the beach. Rather, Abs is short for "Absolute". Hence, the Abs function accepts a numeric value as its argument and returns its absolute equivalent. In simpler terms, Abs returns the positive version of a given number, whether it's positive or negative to begin with. Here's the function signature:</p><pre>ABS(inputNumber)</pre><p>In mathematics and statistics, deviation is a measure of difference between the value of a variable and some other value, often that variable's mean, or average. The deviation can either be signed or unsigned. The latter is where the Abs function comes in.  Here's a query against the ClassicModels Sample Database that shows the signed and unsigned (absolute) deviation of customers' credit limits, grouped by city:</p><img alt="abs (130K)" src="https://www.navicat.com/link/Blog/Image/2021/20210511/abs.jpg" height="676" width="543" /><h1 class="blog-sub-title">Round</h1><p>Another extremely popular numeric function is Round.  
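</p><p>For instance, rounding a currency column to two decimal places might look like this (a sketch against the ClassicModels customers table; the alias is an assumption):</p><pre>SELECT customerName,
       ROUND(creditLimit, 2) AS rounded_credit_limit
FROM customers;</pre><p>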
Rounding functions can vary quite a bit in their implementation; some only round to an integer, while others let you specify the number of decimal places to round to. SQL Server's Round function goes one step further by accepting up to three arguments: </p><pre>ROUND(number, decimals, operation)</pre><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>number: a floating-point (decimal) number to be rounded</li><li>decimals: the number of decimal places to round number to</li><li>operation: an optional parameter that affects rounding operations. If 0 (or omitted), the function performs regular rounding, whereby a digit of 5 or greater rounds the next digit up. Any value other than 0 causes the function to truncate the result to the number of decimals.</li></ul><p>It's extremely common to round currency values to 2 decimal places.  Here's our previous query with rounded figures:</p><img alt="round (111K)" src="https://www.navicat.com/link/Blog/Image/2021/20210511/round.jpg" height="610" width="612" /><h1 class="blog-sub-title">Ceiling</h1><p>The Ceiling function is similar to Round, except that it always rounds up to the next integer value. Hence, both 25.01 and 25.75 would be rounded up to 26.  Here's its syntax:</p><pre>CEILING(number)</pre><p>Let's apply the Ceiling function to our previous query by comparing the credit limits rounded to the nearest integer with those filtered through Ceiling:</p><img alt="ceiling (127K)" src="https://www.navicat.com/link/Blog/Image/2021/20210511/ceiling.jpg" height="610" width="639" /><h1 class="blog-sub-title">Floor</h1><p>Floor is the reverse of the Ceiling function; it always rounds a number down to the first integer that is less than or equal to that number. With positive numbers, Floor simply truncates decimals without altering the next highest integer. However, with negative numbers, it does increment the integer - downwards. 
For example, the floor of -0.5 is -1, as it is the first integer that is less than -0.5.</p><pre>FLOOR(number)</pre><p>Applying the Floor function to our example query without the use of Abs shows its effect on both positive and negative numbers:</p><img alt="floor (67K)" src="https://www.navicat.com/link/Blog/Image/2021/20210511/floor.jpg" height="609" width="637" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we covered some of the most important numerical functions of SQL Server. In the next installment, we'll be looking at Date functions.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><br/><hr/><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial businesses. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Important SQL Server Functions - String Utilities</title>
<link>https://www.navicat.com/company/aboutus/blog/1715-important-sql-server-functions-string-utilities.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Important SQL Server Functions - String Utilities</title></head><body><b>May 7, 2021</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: String Utilities</h1><p>There are certain functions that seem to come up in every programming language.  Although SQL differs from your typical procedural programming language like C# or Java in many ways, it too comes equipped with an impressive assortment of built-in functions.  These may be applied to Char, Varchar, and Text data types. Database vendors do not all implement functions in exactly the same way, so it pays to familiarize yourself with functions that are specific to the database you work with. In this series, we'll be taking a look at a few important SQL functions, as implemented by SQL Server. Today's blog will tackle string functions, while subsequent installments will explore numeric functions, date functions, and more!</p><h1 class="blog-sub-title">Len</h1><p>One of the most useful string functions is one that returns a string's length in characters (including spaces and punctuation). In Microsoft products, there is a long tradition of calling this function "Len". Here's the function signature:</p><pre>LEN(inputString)</pre><p>As an example, we'll execute a real query against the Sakila Sample Database using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a> as our database client. The query selects the top 10 longest titles from the film table in descending order:</p><img alt="len (63K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/len.jpg" height="442" width="332" /><h1 class="blog-sub-title">Trim</h1><p>Looking to trim some fat off of a string?  Then the trim function is for you! It eliminates excess spaces from the beginning and end of a string that we pass in as its argument. 
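</p><p>For instance (a quick sketch; note that TRIM is available in SQL Server 2017 and later):</p><pre>SELECT TRIM('   Hello, World!   ') AS trimmed;
-- returns 'Hello, World!'</pre><p>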
Here's the signature for trim:</p><pre>TRIM(inputString)</pre><p>We can use trim to find out if any of our film titles contain any leading or trailing spaces by comparing the length of the trimmed title to what's there currently: </p><img alt="trim (68K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/trim.jpg" height="328" width="617" /><h1 class="blog-sub-title">Concat</h1><p>In programming, the combining of strings is known as concatenation.  Hence, the concat function combines two or more strings that we pass in as its arguments. Here's its signature:</p><pre>CONCAT(string1, string2, ...., string_n)</pre><p>The concat function is really useful to format multiple columns together in a way that works for you and your users.  The following query combines the ID, title, and release year for each film and separates them using commas:</p><img alt="concat (83K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/concat.jpg" height="542" width="547" /><h1 class="blog-sub-title">Upper &amp; Lower</h1><p>These two counterpart functions take a string argument and return the same string but with all its characters cast to uppercase and lowercase, respectively.</p><pre>UPPER(inputString)</pre><p>To show the effects of the Upper &amp; Lower functions, we can show film titles in their original case and altered through each function: </p><img alt="upper_and_lower (125K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/upper_and_lower.jpg" height="545" width="560" /><h1 class="blog-sub-title">Working with Functions in Navicat</h1><p>One of the features of Navicat's SQL Editor is auto-completion. 
As soon as you begin to type a word, a list of suggestions comes up that includes all database objects, including schema, table/view, column, procedure, and, of course, function names:</p><img alt="autocomplete (20K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/autocomplete.png" height="223" width="521" /><p>Once a function (or stored procedure) is selected, input parameters are highlighted for entry. If there is more than one, each parameter is tabbable for quick access:</p><img alt="input_params (6K)" src="https://www.navicat.com/link/Blog/Image/2021/20210507/input_params.png" height="85" width="416" /><h1 class="blog-sub-title">Conclusion</h1><p>In this first installment of this series on Important SQL Server Functions, we looked at several useful string utility functions, including Len, Trim, Concat, Upper, and Lower.  Next time, we'll be moving on to numeric functions.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Iterate over Query Result Sets Using a Cursor</title>
<link>https://www.navicat.com/company/aboutus/blog/1714-iterate-over-query-result-sets-using-a-cursor.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Iterate over Query Result Sets Using a Cursor</title></head><body><b>May 4, 2021</b> by Robert Gravelle<br/><br/><p>Being a transactional programming language, SQL is designed to execute its work in an all-or-nothing capacity. Meanwhile, procedural programming languages such as C# and Java are often iterative in nature.  As such, they tend to loop over the same code until their input has been fully processed. Cursors are a notable exception to SQL's transactional approach. Like WHILE loops, cursors allow programmers to process each row of a SELECT result set individually by iterating over them.  While many SQL purists shun cursors out of disdain or fear, they have their place in database development and are well worth learning.  To that end, today's blog will describe when and how to use cursors within your stored procedures.</p><h1 class="blog-sub-title">Cursors Defined</h1><p>As mentioned above, a database cursor is a special control structure that enables traversal over the rows of a query result set so that they can be processed sequentially, one at a time. 
In Stored Procedures, a cursor makes it possible to perform complex logic on a row by row basis.</p><p>Cursors have three important properties:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Asensitive: The server may or may not make a copy of its result table.</li><li>Read-only: The data may not be updated.</li><li>Nonscrollable: Can be traversed only in one direction and cannot skip rows.</li></ul><h1 class="blog-sub-title">How to Use a Cursor</h1><p>Using a cursor within a stored procedure is a four step process:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Declare a cursor.</li><li>Open a cursor.</li><li>Fetch the data into variables.</li><li>Close the cursor when done.</li></ul><h3>Declare a Cursor</h3><p>The following statement declares a cursor and associates it with a SELECT statement that retrieves the rows to be traversed by the cursor:</p><pre>DECLARE cursor_name CURSOR FOR select_statement</pre><h3>Open a Cursor</h3><p>The following statement opens a previously declared cursor.</p><pre>OPEN cursor_name</pre><h3>Fetch the Data into Variables</h3><p>This statement fetches the next row for the SELECT statement associated with the specified cursor (which must be open) and advances the cursor pointer. If a row exists, the fetched columns are stored in the named variable(s). The number of columns retrieved by the SELECT statement must match the number of output variables specified in the FETCH statement.</p><pre>FETCH [[NEXT] FROM] cursor_name INTO var_name [, var_name] ...</pre><h3>Close the Cursor When Done</h3><p>This statement closes the cursor. 
An error occurs if the cursor is not open.</p><pre>CLOSE cursor_name</pre><h1 class="blog-sub-title">A Practical Example</h1><p>Here's the definition of a stored procedure (shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>) that employs a cursor to generate a list of emails for all staff members in the Sakila sample database:</p><img alt="cursor_definition (81K)" src="https://www.navicat.com/link/Blog/Image/2021/20210504/cursor_definition.jpg" height="525" width="508" /><p>Within the <i>getEmail</i> LOOP, the cursor iterates over the email list and concatenates all emails, separated by semicolons (;). The <i>finished</i> variable signals the loop to terminate once no more emails are fetched. Here is the value of the <i>emailList</i> after execution of the stored procedure:</p><img alt="cursor_result (22K)" src="https://www.navicat.com/link/Blog/Image/2021/20210504/cursor_result.jpg" height="132" width="356" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned when and how to use cursors within stored procedures.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Copying a Table to a New Table using Pure SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1713-copying-a-table-to-a-new-table-using-pure-sql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Copying a Table to a New Table using Pure SQL</title></head><body><b>Apr 28, 2021</b> by Robert Gravelle<br/><br/><p>There are many times where one needs to copy data from an existing table to a new one, for example, to back up data or to replicate data from one environment in another, as one might do for testing purposes. In some databases, such as SQL Server, one would typically use a SELECT INTO statement as follows:</p><pre>SELECT col1, col2, col3
INTO new_table
FROM existing_table;</pre><p>Here, the database creates a new table with the name indicated in the INTO clause. The structure of the new table is defined by the result set of the SELECT statement. The database then populates the new table with the results of the SELECT statement.</p><p>While the above procedure works perfectly well, there's an easier way to copy a table into a new one using a variation of the CREATE TABLE statement!  We'll learn how to use it here today.</p><h1 class="blog-sub-title">Introducing the CREATE TABLE AS SELECT Statement</h1><p>The CREATE TABLE statement provides a way to create one table from another by adding a SELECT statement at the end of the CREATE TABLE statement.  The full syntax for the statement is:</p><pre>CREATE TABLE new_tbl [AS] SELECT * FROM orig_tbl;</pre><p>It accomplishes the exact same thing in a single statement, and it's the supported approach in databases, such as MySQL, that don't offer SELECT INTO for this purpose.</p><h1 class="blog-sub-title">Copying Partial Data</h1><p>Since the SELECT statement supports all clauses that you'd usually employ in your SQL statements, including the WHERE and ORDER BY clauses, we can limit what we copy over by supplying a condition in our statement.  
Here's the syntax for that:</p><pre>CREATE TABLE new_table
SELECT col1, col2, col3
FROM existing_table
WHERE conditions;</pre><h1 class="blog-sub-title">Some Examples</h1><p>Here are a couple of examples using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as our database client:</p><p>In its most basic form, the CREATE TABLE AS SELECT statement can copy a table "as-is" using a SELECT All (*).  Here's an example:</p><img alt="offices_bkp (82K)" src="https://www.navicat.com/link/Blog/Image/2021/20210428/offices_bkp.jpg" height="457" width="589" /><p>Here's a more complex example that only copies three columns from an orders table and limits rows to those with a recent <i>requiredDate</i>:</p><img alt="orders_copy (74K)" src="https://www.navicat.com/link/Blog/Image/2021/20210428/orders_copy.jpg" height="396" width="594" /><p>We can see that the new table only has the three columns we selected:</p><img alt="orders_copy_data (55K)" src="https://www.navicat.com/link/Blog/Image/2021/20210428/orders_copy_data.jpg" height="421" width="318" /><h1 class="blog-sub-title">Conclusion</h1><p>There's no question that the CREATE TABLE AS SELECT statement offers a quick and easy way to copy data from a table into a new one.  Having said that, it does have its limitations.  For starters, not all relational databases support it. MySQL, PostgreSQL, and Oracle do, but SQL Server, for one, relies on SELECT INTO instead.</p><p>It is also worth noting that the CREATE TABLE AS SELECT statement just copies the table and its data. It does not copy other database objects such as indexes, primary key constraints, foreign key constraints, triggers, etc., associated with the table. 
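</p><p>You can verify this for yourself with SHOW INDEX (a minimal sketch for MySQL; the orders table comes from the example above, while the copy's name is hypothetical):</p><pre>CREATE TABLE orders_ctas AS SELECT * FROM orders;

SHOW INDEX FROM orders;      -- lists the primary key
SHOW INDEX FROM orders_ctas; -- returns no rows: no indexes were copied</pre><p>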
To copy the data along with the table's structure, including column attributes and indexes, we can use two separate statements as follows:</p><pre>CREATE TABLE orders_copy LIKE orders;

INSERT INTO orders_copy
SELECT * FROM orders;</pre><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Using Transactions in Stored Procedures to Guard against Data Inconsistencies</title>
<link>https://www.navicat.com/company/aboutus/blog/1712-using-transactions-in-stored-procedures-to-guard-against-data-inconsistencies.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Transactions in Stored Procedures to Guard against Data Inconsistencies</title></head><body><b>Apr 20, 2021</b> by Robert Gravelle<br/><br/><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1711-understanding-database-transactions.html" target="_blank">Understanding Database Transactions</a> blog, we learned how transactions are a fantastic way to guard against data loss and inconsistencies by guaranteeing that all operations performed within a transaction succeed or fail together. In today's follow-up, we'll learn how to employ a transaction within a stored procedure in order to ensure that all tables involved remain in a consistent state.</p><h1 class="blog-sub-title">About the sp_delete_from_table Stored Procedure</h1><p>If you've read any of my previous blog articles, you probably know that I often illustrate new concepts using the Sakila Sample Database. And why not? It was developed specifically as a learning database for MySQL.  If you aren't already aware, the Sakila Sample Database contains data pertaining to a fictitious movie rental store chain. Besides tables and views, you'll also find user functions, triggers, queries, and stored procedures that illustrate the most commonly used database objects and tasks.</p><p>One of the stored procedures that is especially relevant to this blog is sp_delete_from_table. It accepts three input parameters as follows: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>@table: the name of the table from which to delete rows.</li><li>@whereclause: the criteria for identifying which rows to delete.</li><li>@delcnt: how many rows we expect to be deleted.</li></ul><p>The procedure returns the @actcnt (bigint) output parameter that contains the number of rows that were actually deleted. 
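</p><p>A call might look something like this (a sketch only; T-SQL syntax is assumed, and the table name and criteria are hypothetical):</p><pre>DECLARE @rows bigint;

EXEC sp_delete_from_table
     @table = 'film_text',
     @whereclause = 'WHERE film_id = 999',
     @delcnt = 1,
     @actcnt = @rows OUTPUT;</pre><p>If @rows comes back different from @delcnt, the procedure will have rolled the deletion back.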
</p><p>Here's the full definition as shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="delete_from_table_stored_proc (163K)" src="https://www.navicat.com/link/Blog/Image/2021/20210420/delete_from_table_stored_proc.jpg" height="754" width="818" /><h1 class="blog-sub-title">Important Transaction Statements</h1><p>Relational databases provide us with several important statements to control transactions:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>To start a transaction, use the BEGIN TRANSACTION statement. Depending on the database, START TRANSACTION and BEGIN WORK are equivalent forms.  You'll find it on line 17 of the sp_delete_from_table procedure.</li><li>To commit the current transaction and make its changes permanent, use the COMMIT statement.  That happens on line 32 of the procedure.</li><li>To roll back the current transaction and cancel its changes, use the ROLLBACK statement. There are a couple of situations where that comes up in the code:<ol><li>If the statement would have deleted all rows in the table, a message is displayed and the transaction is rolled back on line 26.</li><li>Should the number of rows deleted not match the number that you expected, again, a message is displayed and the transaction is rolled back.  That happens on line 38.</li></ol></li><li>To disable or enable the auto-commit mode for the current transaction, use the SET autocommit statement. Some databases, such as MySQL, run with autocommit mode enabled by default. This means that, when not otherwise inside a transaction, each statement is atomic, as if it were surrounded by START TRANSACTION and COMMIT. You cannot use ROLLBACK to undo the effect. However, if an error occurs during statement execution, the statement is rolled back. 
Since most of the work takes place within a transaction in the sp_delete_from_table procedure, the SET autocommit statement is not needed.</li></ul><h1 class="blog-sub-title">Testing a Transaction Rollback</h1><p>Since we know that the sp_delete_from_table procedure will abort if the expected count does not match the actual number of rows deleted, we can test for rollbacks either by making sure that our @whereclause criteria would delete every row in the table or by simply providing a @delcnt value that we know won't match. Let's try the latter.</p><p>In Navicat, we can run a stored procedure from the editor via the Execute button.  Clicking it brings up a dialog that accepts input parameters (output params may be ignored):</p><img alt="input_dialog (23K)" src="https://www.navicat.com/link/Blog/Image/2021/20210420/input_dialog.jpg" height="216" width="418" /><p>After the procedure terminates, the Message tab displays output messages confirming that the transaction was rolled back as expected:</p><img alt="proc_result (33K)" src="https://www.navicat.com/link/Blog/Image/2021/20210420/proc_result.jpg" height="169" width="496" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to employ a transaction within a stored procedure in order to ensure that all tables involved remain in a consistent state, no matter the outcome.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><br/><br/><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations.  
In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Understanding Database Transactions</title>
<link>https://www.navicat.com/company/aboutus/blog/1711-understanding-database-transactions.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Understanding Database Transactions</title></head><body><b>Apr 16, 2021</b> by Robert Gravelle<br/><br/><p>The term "ACID" - short for Atomicity, Consistency, Isolation, Durability - was coined by Theo Härder and Andreas Reuter in 1983.  It's a concept in database management systems (DBMS) that identifies a set of standard properties used to guarantee the reliability of a database. ACID properties ensure that all database transactions remain accurate and consistent, and support the recovery from failures that might occur during processing operations. As such, it is implemented by nearly all Relational Databases.</p><p>As it turns out, DBMS that offer support for transactions enforce the four ACID properties automatically. In today's blog, we'll learn how transactions do that.  In upcoming articles, we'll look at how to use transactions in our stored procedures to guard against data inconsistencies.</p><h1 class="blog-sub-title">Transactions Explained</h1><p>Before we can employ transactions within our own stored procedures, it might help to understand what a transaction is.  Simply put, a transaction is a set of operations performed so that all of them are guaranteed to succeed or fail as one unit.</p><p>As an example, consider the process of transferring money from a checking account to a savings account. This action is actually made up of two parts:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Withdraw funds from the checking account.</li><li>Deposit the funds into the savings account.</li></ul><p>Now, imagine what might happen if the power went out after the first step. I think that we can agree that there would be a problem if funds were deducted from the checking account but not added to the savings account! Just as you would not want this to happen with your financial transactions, updating one database table without updating tables that refer to it is just as undesirable. 
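</p><p>In SQL, such a transfer might be sketched as follows (a minimal sketch; the accounts table and its columns are hypothetical):</p><pre>START TRANSACTION;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 1; -- checking
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2; -- savings

COMMIT; -- or ROLLBACK; to undo both updates</pre><p>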
By employing a transaction, both the operations are guaranteed to succeed or fail together. That way, all entities involved remain in a consistent state.</p><h1 class="blog-sub-title">ACID and Transactions</h1><p>Transactions play a key role in enforcing all four ACID properties: Atomicity, Consistency, Isolation, and Durability. Let's see how they do that.</p><h3>Atomicity</h3><p>A database operation is considered atomic if it cannot be further broken down into separate operations. A transaction is also atomic because all of the operations that occur within a transaction either succeed or fail together. Should any single operation fail during a transaction, then everything is considered to have failed and must be undone (i.e. rolled back). </p><h3>Consistency</h3><p>One of the main perks of using a transaction is that it should leave the database in a consistent state, whether or not it completes successfully. This ensures that data modified by the transaction complies with all the constraints placed on the columns so that data integrity is maintained. </p><h3>Isolation</h3><p>Every transaction is isolated from other transactions. Therefore, a transaction shouldn't affect other transactions running at the same time. Stated another way, data modifications made by one transaction should be isolated from the data modifications made by other transactions. So, while a transaction can see data in the state it was in before another concurrent transaction modified it, as well as after the second transaction has completed, it cannot see any intermediate states.</p><h3>Durability</h3><p>Transactions help with Durability in a few ways: data modifications that take place within a successful transaction may be safely considered to be stored in the database regardless of whatever else may occur. As each transaction is completed, a row is entered in the database transaction log. 
Thus, in the event of a system failure that requires the database to be restored from a backup, you can use the transaction log to bring the database back to the state it was in as of the last successful transaction.</p><h1 class="blog-sub-title">Conclusion</h1><p>Transactions are a fantastic way to enforce the four ACID properties within your database(s). In today's blog, we learned how transactions do that.  In the next article, we'll look at how to use transactions in our stored procedures to guard against data inconsistencies.</p></body></html>]]></description>
</item>
<item>
<title>Preventing SQL Injection at the Database Level </title>
<link>https://www.navicat.com/company/aboutus/blog/1710-preventing-sql-injection-at-the-database-level.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Preventing SQL Injection at the Database Level </title></head><body><b>Apr 14, 2021</b> by Robert Gravelle<br/><br/><p>Many organizations make some effort to protect their data by implementing input validation within their applications.  As valuable as that is, it should be noted that many cyber attacks are aimed squarely at the database servers themselves, where application security does not come into play at all!  As a Database Administrator (DBA) or Database Developer, you have tremendous power to reduce the risk of cyber attacks, and/or the damage that may occur as a result, including from the most common form of cyber attack: SQL Injection.  In today's blog, we'll explore a few practices that can greatly reduce exposure to SQL Injection attacks.</p><h1 class="blog-sub-title">Place All Database Logic within Stored Procedures</h1><p>The more easily a malicious entity can pass in unfiltered SQL to the database server(s), the more susceptible your data will be to loss or theft. By placing all of your queries and data manipulation language (DML) statements inside stored procedures, you can make it much more difficult for hackers to issue arbitrary DML statements.  </p><p>The following code example uses a CallableStatement, Java's implementation of the stored procedure interface, to execute a database query:</p><pre>String custname = request.getParameter("customerName");
try {
  CallableStatement cs = connection.prepareCall("{call sp_getCustomerAccount(?)}");
  cs.setString(1, custname);
  ResultSet results = cs.executeQuery();
  // ...result set handling
} catch (SQLException se) {
  // ...logging and error handling
}</pre><h1 class="blog-sub-title">Whitelist Input Validation</h1><p>User-supplied values are not the place from which to bind database entities such as table or column names, or even the sort order indicator (ASC or DESC). Those values should come from your own SQL code, and not from user parameters. 
To target specific table and column names, parameter values should be mapped to the legal - i.e. expected - table and/or column names to prevent unvalidated user input from ending up in the query.</p><p>Here is an example of table name validation:</p><pre>String tableName;
switch(PARAM) {
  case "Value1":
    tableName = "clientTable";
    break;
  case "Value2":
    tableName = "employeeTable";
    break;
  ...
  default:
    throw new InputValidationException("unexpected value provided"
                                     + " for table name");
}</pre><p>For something simple like a sort order, one solution is to accept the user-supplied input as a boolean, which is then utilized to select the safe value to append to the query. In fact, this is a common practice in dynamic query construction.</p><pre>public String myMethod(boolean sortOrder) {
  String SQLquery = "some SQL ... order by Salary " + (sortOrder ? "ASC" : "DESC");
  ...</pre><h1 class="blog-sub-title">Escape/Sanitize All User-Supplied Input</h1><p>Only when none of the above are feasible should user input escaping be employed. The reason that this defense is considered frail compared to the others is that there is no guarantee that it will prevent all SQL Injection in every possible situation.</p><p>It's vitally important that you match the user input escaping to your particular database type, as every DBMS supports one or more character escaping schemes specific to certain kinds of queries. 
By escaping all user-supplied input using the proper escaping scheme for the specific database you are using, the DBMS will not confuse that input with SQL code written by the developer, thus avoiding many potential SQL injection vulnerabilities.</p><p>The <a class="default-links" href="https://owasp.org/www-project-enterprise-security-api/" target="_blank">OWASP Enterprise Security API (ESAPI)</a> is a free, open source, web application security control library that makes it easier for programmers to harden their applications against cyber attacks. The ESAPI libraries are designed to make it easy for programmers to retrofit security into existing applications as well.</p><p>Using an ESAPI database codec is pretty simple. An Oracle example looks something like this:</p><pre>ESAPI.encoder().encodeForSQL( new OracleCodec(), queryparam );</pre><h1 class="blog-sub-title">Conclusion</h1><p>As a Database Administrator (DBA) or Database Developer, you have tremendous power to reduce the risk of cyber attacks, and/or the damage that may occur as a result thereof, including from the most common form of cyber attack: SQL Injection.  By following the practices outlined here today, you can greatly reduce exposure to SQL Injection attacks.</p><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations. You can hire Rob by emailing him at rgconsulting(AT)robgravelle(DOT)com. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Atomicity in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1709-atomicity-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Atomicity in Relational Databases</title></head><body><b>Mar 30, 2021</b> by Robert Gravelle<br/><br/><p>Not so long ago, the word "atom" referred to a thing that could not be split any further.  Even though we have since discovered that atoms are made up of even smaller particles, the term continues to retain its original meaning. With respect to relational databases, Atomicity means that operations (DMLs/DDLs, etc.) executed by the database will be atomic. The unit of atomicity usually provided by relational databases is a transaction. Why is this important? A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series of operations outright. In today's blog, we'll learn what Atomicity is and how to enforce it within your database instances.</p><h1 class="blog-sub-title">You Can't Spell ACID without Atomicity</h1><p>You've probably heard the term "ACID" thrown about with respect to relational databases.  It stands for "Atomicity Consistency Isolation Durability". It's a concept in database management systems (DBMS) that identifies a set of standard properties used to guarantee the reliability of a database. ACID properties ensure that all database transactions remain accurate and consistent, and support the recovery from failures that might occur during processing operations. As such, it is implemented by nearly all Relational Databases.</p><p>Here's where Atomicity comes in:</p><p>Say that you are performing a database UPDATE that takes 10 seconds to process all the rows in the table.  As the updates proceed, the power suddenly goes out! Once power is restored, you go to read the data and discover that some of the rows were updated according to your SQL statement, and the rest of the rows were not. You've now got yourself a bit of a mess!</p><p>Luckily, this can't happen with today's modern databases, right?  Wrong. 
</p><h1 class="blog-sub-title">Know Your Storage Engine</h1><p>In many cases, the type of database you use is not as important as the storage engine that's being employed.  The storage engine is the underlying software component that a DBMS uses to create, read, update and delete (CRUD) data.  Most databases support several different types of storage engines. For example, MySQL currently offers the following out of the box:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>InnoDB</li><li>MyISAM</li><li>Memory</li><li>CSV</li><li>Archive</li><li>Blackhole</li><li>NDB</li><li>Merge</li><li>Federated</li><li>Example</li></ul><p>You are not restricted to using the same storage engine for an entire server or schema. You can specify the storage engine at the table level.</p><p>There exists a wide variety of storage engines because each is effective for certain operations and environments yet very ineffective for others. It's important to take this into consideration and pick the storage engine that will work best for your usage patterns.</p><p>Back to our example: if you were using the MyISAM engine, you could be in trouble, because MyISAM does not enforce atomicity. Hence, a single change can be partially applied, whereby some rows in the intended set are affected, but the rest are not.  On the other hand, the InnoDB storage engine DOES ensure that any UPDATE will be applied to the complete set of rows you intended, or else it will apply to none of the rows if an error occurs or if the transaction is interrupted for some reason.</p><h1 class="blog-sub-title">Selecting a Storage Engine in Navicat</h1><p>Navicat makes selecting a storage engine for each table in your database easy via a drop-down on the Options tab within the Table Designer.  
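</p><p>The storage engine can also be set and inspected with pure SQL (a minimal sketch for MySQL; the table name is hypothetical):</p><pre>CREATE TABLE payments (
    id INT PRIMARY KEY
) ENGINE = InnoDB;

-- or change it later:
ALTER TABLE payments ENGINE = InnoDB;

-- and verify:
SHOW CREATE TABLE payments;</pre><p>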
Here's what you'll find for MySQL in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="storage_engine (135K)" src="https://www.navicat.com/link/Blog/Image/2021/20210330/storage_engine.jpg" height="696" width="740" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned what database Atomicity is and how to enforce it within your database instances.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations. You can hire Rob by emailing him at rgconsulting(AT)robgravelle(DOT)com. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Using Group By and Order By in the Same Query</title>
<link>https://www.navicat.com/company/aboutus/blog/1708-using-group-by-and-order-by-in-the-same-query.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Group By and Order By in the Same Query</title></head><body><b>Mar 25, 2021</b> by Robert Gravelle<br/><br/><p>Both GROUP BY and ORDER BY are clauses that organize query results. However, each serves a very different purpose; so different, in fact, that they can be employed separately or together.  And that is where things can get a little dicey if you are unsure of what you're doing.  In today's blog, we'll learn what each clause does and how to use them together for the ultimate control over your query output.  To do that we'll be using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>.</p><h1 class="blog-sub-title">GROUP BY and ORDER BY Explained</h1><p>The purpose of the ORDER BY clause is to sort the query result by one or more columns. Meanwhile, the GROUP BY clause is used to arrange data into groups with the help of aggregate functions such as COUNT(), AVG(), MIN() and MAX(). 
It works like this: if a particular column has the same value in different rows, GROUP BY will amalgamate those rows into a group.</p><p>Let's look at an example of each.</p><p>Here's a query that displays the first and last names of all actors from the table actor, sorted by last name, followed by first name:</p><img alt="order_by (77K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/order_by.jpg" height="749" width="402" /><p>Now, here's another query that groups actors by the number of films that they have appeared in:</p><img alt="group_by (49K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/group_by.png" height="599" width="634" /><h1 class="blog-sub-title">Using Group By and Order By Together</h1><p>Notice that, in the preceding query, records are ordered by the actor_id field, which is what results are grouped on. If we wanted to order results using different - i.e. non-grouped - fields, we would have to add an ORDER BY clause. Here's the same query, but ordered by the number of films that each actor has appeared in, from most to least:</p><img alt="actors ordered by number of films (40K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/actors ordered by number of films.png" height="532" width="564" /><p>Notice that, once you include the Order By clause, the default group ordering is lost.  
If you'd like to keep it, you can add grouped columns to the Order By field list: </p><img alt="actors_ordered_by_id_and_name (141K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/actors_ordered_by_id_and_name.jpg" height="818" width="565" /><h3>Points to Keep in Mind</h3><p>When combining the Group By and Order By clauses, it is important to bear in mind that, in terms of placement within a SELECT statement:</p><ul><li>The GROUP BY clause is placed after the WHERE clause.</li><li>The GROUP BY clause is placed before the ORDER BY clause.</li></ul><p>GROUP BY goes before the ORDER BY statement because the latter operates on the final result of the query. </p><h1 class="blog-sub-title">Bonus Section: the Having Clause</h1><p>You can filter the grouped data further by using the HAVING clause. The HAVING clause is similar to the WHERE clause, but operates on groups of rows rather than on individual rows. To illustrate how the HAVING clause works, we can use it to limit results to those actors who've appeared in more than ten films:</p><img alt="actors_in_more_than_10_films (144K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/actors_in_more_than_10_films.jpg" height="825" width="566" /><p>Navicat's SQL Editor greatly facilitates query writing thanks to features like syntax highlighting, reusable code snippets for control flow/DDL/syntax statements, as well as auto-complete. It can suggest everything from schemas, tables, and columns to stored procedures and functions. 
Here is the HAVING keyword in the suggestion list:</p><img alt="auto_complete_for_having (9K)" src="https://www.navicat.com/link/Blog/Image/2021/20210325/auto_complete_for_having.png" height="224" width="435" /><p>The HAVING clause should be placed <i>after</i> the Group By clause, but <i>before</i> the Order By clause.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned what each clause does and how to use them together for the ultimate control over your query output using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations. You can hire Rob by emailing him at rgconsulting(AT)robgravelle(DOT)com. In his spare time, Rob has become an accomplished music artist with several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital releases</a> to his credit. </p></body></html>]]></description>
</item>
<item>
<title>Calculating Daily Average Date/Time Intervals in MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1707-calculating-daily-average-date-time-intervals-in-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Calculating Daily Average Date/Time Intervals in MySQL</title></head><body><b>Mar 19, 2021</b> by Robert Gravelle<br/><br/><p>In a <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1700-calculating-average-daily-counts-in-sql-server" target="_blank">previous blog</a>, we tabulated the average daily counts for a given column in SQL Server using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>.  In today's follow-up, we're going to raise the difficulty factor slightly by calculating the daily average date/time interval based on start and end date columns. For demonstration purposes, I'll be working with MySQL using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><h1 class="blog-sub-title">Calculating Movie Rental Durations in Days</h1><p>In the Sakila Sample Database's rental table there are two date fields that represent a time interval: the rental and return dates. These, of course, store the date and time that a film was rented, and when it was returned.</p><img alt="rental_table (103K)" src="https://www.navicat.com/link/Blog/Image/2021/20210319/rental_table.png" width="951" height="664" style="height: auto;max-width: 800px;" /><p>With that in mind, suppose that we needed to write a query that shows the average length of movie rentals for each day.  The first step would be to calculate the length of all movie rentals.  Here's what that query would look like in Navicat:</p><img alt="rental_length_in_days_query (99K)" src="https://www.navicat.com/link/Blog/Image/2021/20210319/rental_length_in_days_query.jpg" height="827" width="542" /><p>To convert the rental_date from a datetime to a pure date we can use the DATE() function. 
It accepts any valid date or datetime expression.</p><p>The number of days is calculated using the MySQL DATEDIFF() function. It returns the number of days between two dates or datetimes. Navicat can help us use the DATEDIFF() function by providing auto-complete. When you start to type a word, a popup list appears with suggestions for schemas, tables/views, columns, stored procedures, and functions. Here is the DATEDIFF() function in the suggestion list:</p><img alt="auto_complete_with_datediff_function (12K)" src="https://www.navicat.com/link/Blog/Image/2021/20210319/auto_complete_with_datediff_function.png" height="252" width="503" /><p>After you select a function, it gets inserted into your code at the cursor position with tabbable, color-coded input parameters for quick entry:</p><img alt="datediff_function_with_color_coded_parameters (3K)" src="https://www.navicat.com/link/Blog/Image/2021/20210319/datediff_function_with_color_coded_parameters.png" height="32" width="362" /><br /><br /><div style="border:2px solid black;padding: 5px;"><h3>A Word about Times</h3><p>For shorter timeframes, you can use TIMEDIFF() instead of DATEDIFF(). It returns the difference as a time value; wrap it in TIME_TO_SEC() to obtain seconds, which you can then divide by 60 for minutes and by another 60 for hours.</p></div><h1 class="blog-sub-title">Grouping Results by Day</h1><p>The next step is to group results by day.  This is done via the GROUP BY clause. It allows us to apply aggregate functions such as COUNT() and AVG() to the number of days that rentals were out for each rental_date. Here is the updated query with the GROUP BY clause:</p><img alt="average_rental_length_query (61K)" src="https://www.navicat.com/link/Blog/Image/2021/20210319/average_rental_length_query.png" height="918" width="579" /><p>You'll notice that I rounded the avg_days_rented to one decimal place.  
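Reconstructed as plain SQL, the grouped query might look something like this (a sketch based on the description above rather than a copy of the screenshot; the IS NOT NULL filter is my own addition to skip rentals that haven't been returned yet):</p><pre>SELECT DATE(rental_date) AS rental_day,
       ROUND(AVG(DATEDIFF(return_date, rental_date)), 1) AS avg_days_rented
FROM rental
WHERE return_date IS NOT NULL
GROUP BY DATE(rental_date)
ORDER BY rental_day;</pre><p>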
Otherwise, we'd get four decimal places of precision, which may be a bit much for our purposes!</p><h1 class="blog-sub-title">Conclusion</h1><p>Thanks to MySQL's many date/time functions, calculating the daily average date/time interval based on start and end date columns is a lot easier than it otherwise would be. Moreover, Navicat's feature-rich SQL Editor further simplifies query writing by providing auto-complete for just about any database entity, including schemas, tables, columns, functions, and stored procedures.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p><br /><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations. In his spare time, Rob has become an accomplished guitar player and has released several CDs and <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">digital singles</a>. You can hire Rob by emailing him at rgconsulting@robgravelle.com.</p></body></html>]]></description>
</item>
<item>
<title>Querying Multiple Tables without Joins</title>
<link>https://www.navicat.com/company/aboutus/blog/1706-querying-multiple-tables-without-joins.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Querying Multiple Tables without Joins</title></head><body><b>Mar 15, 2021</b> by Robert Gravelle<br/><br/><p>Normally, querying a normalized database necessitates joining tables together on one or more common fields. Otherwise, you risk generating a Cartesian product: a result set whose number of rows equals the number of rows in the first table multiplied by the number of rows in the second. So, if the input contains 1000 persons and 1000 phone numbers, the result consists of 1,000,000 pairs! Not good. Having said that, if you want to aggregate data from similar tables that are not directly related, you can do so using the UNION operator. In today's blog, we'll learn some of the finer points of using UNION, along with its close cousin, UNION ALL.</p><h1 class="blog-sub-title">UNION versus UNION ALL</h1><p>Some people think that UNION and UNION ALL are interchangeable. They are not. They differ in that UNION removes duplicate rows, whereas UNION ALL does not; instead, it simply selects all the rows from the tables which meet your queries' WHERE criteria and combines them into the results.</p><h1 class="blog-sub-title">Combining Results with UNION</h1><p>Here's an example that combines the results of two SELECT queries using the Sakila Sample Database.</p><p>Let's say that you wanted to find the names of all the actors and customers whose first name is the same as that of the actor with ID 8, but without returning actor 8's details. Although there is more than one way to achieve this, one solution is to employ UNION ALL (UNION would work as well, since there are no duplicates between the result sets). 
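Sketched against the Sakila schema, such a statement might take this general form (an illustrative reconstruction; the screenshot that follows shows the query as actually run):</p><pre>SELECT first_name, last_name FROM customer
WHERE first_name = (SELECT first_name FROM actor WHERE actor_id = 8)
UNION ALL
SELECT first_name, last_name FROM actor
WHERE first_name = (SELECT first_name FROM actor WHERE actor_id = 8)
  AND actor_id <> 8;</pre><p>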
Here is the query that does the job, along with the results, in the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> database development and admin client:</p><img alt="customer and actors with same first names (61K)" src="https://www.navicat.com/link/Blog/Image/2021/20210315/customer%20and%20actors%20with%20same%20first%20names.png" height="577" width="752" /><p>In the above query, the top SELECT fetches customers whose first name matches that of the actor with the actor_id of 8, while the bottom query fetches actors with the same first name as their fellow actor with ID 8.</p><p>If you are a Navicat user, you are already aware that its editor is one of the best. It provides syntax highlighting, reusable code snippets, and auto-complete. When you start to type a word, a popup list appears with suggestions for schemas, tables/views, columns, stored procedures, and functions. Here is the UNION operator:</p><img alt="union_operator (15K)" src="https://www.navicat.com/link/Blog/Image/2021/20210315/union_operator.png" height="208" width="435" /><p>A few final words about UNION and UNION ALL. 
It is crucial that all queries return the same number of columns or you'll get an error similar to the following:</p><img alt="error_message (5K)" src="https://www.navicat.com/link/Blog/Image/2021/20210315/error_message.png" height="66" width="524" /><p>That being said, the column types do not have to match across all SELECT statements.</p><p>And finally, keep in mind when employing UNION or UNION ALL that the column names for the result set are determined by the first SELECT.</p><h1 class="blog-sub-title">Conclusion</h1><p>The UNION and UNION ALL operators are the perfect tool for aggregating data from similar tables that are not directly related.</p><p>If you'd like to give <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> a try, you can test drive it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Three Ways to Perform Bulk Inserts</title>
<link>https://www.navicat.com/company/aboutus/blog/1705-three-ways-to-perform-bulk-inserts.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Three Ways to Perform Bulk Inserts</title></head><body><b>Mar 3, 2021</b> by Robert Gravelle<br/><br/><p>I recently wrote a node.js script to iterate over millions of files per day and insert their contents into a MySQL database. Rather than process one record at a time, the script stored file contents in memory and then ran an INSERT statement every 1000 files. To do that, I used the bulk insert form of the INSERT statement.  Depending on your particular requirements, you may opt to go with a different solution.  In today's blog, we'll go over a few alternatives.</p><h1 class="blog-sub-title">INSERT Statement Variation for Bulk Inserts</h1><p>The INSERT statement supports several syntax variations, one of which is for inserting multiple rows at the same time.  To do that, we simply need to enclose each value list in parentheses and separate the lists using commas:</p><pre>INSERT INTO table_name (column_list) VALUES
    (value_list_1),
    (value_list_2),
    ...
    (value_list_n);</pre><p>Simple enough. Here's a sample statement shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>:</p><img alt="bulk_insert (65K)" src="https://www.navicat.com/link/Blog/Image/2021/20210303/bulk_insert.jpg" height="569" width="539" /><p>While the above statement is formatted for readability, you don't have to concern yourself with that when generating the SQL dynamically. As long as the statement is syntactically correct, it will work just fine.  Finally, note that MySQL imposes no fixed limit on the number of rows in a multi-row INSERT (the practical ceiling is the max_allowed_packet setting), although some systems, such as SQL Server, do cap the VALUES list at 1000 rows.</p><h1 class="blog-sub-title">LOAD DATA INFILE</h1><p>Another option, for those of you who aren't thrilled about writing scripting code, is to use something like LOAD DATA INFILE. That's a MySQL-specific command, but most other database systems (DBMS) support something similar. 
It can import a variety of delimited file formats, including comma-separated (CSV), tab-separated (TSV), and others.</p><p>Here's the statement for importing data from the "c:\tmp\discounts.csv" file into the discounts table:</p><pre>LOAD DATA INFILE 'c:/tmp/discounts.csv'
INTO TABLE discounts
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;</pre><p>In the above statement, the IGNORE 1 ROWS option is employed to skip the header row.</p><p>I would have liked to have used this method for importing data, but the files that we were importing from utilized a highly specialized and complex format that required a lot of front-end logic.</p><h1 class="blog-sub-title">Using an Import Utility</h1><p>Still another approach would be to use an import utility such as Navicat's Import Wizard. It supports just about any format that you can imagine, including CSV, Excel, HTML, XML, JSON, and many others:</p><img alt="import_wizard_file_formats (49K)" src="https://www.navicat.com/link/Blog/Image/2021/20210303/import_wizard_file_formats.jpg" height="512" width="682" /><p>There is a screen for choosing the record delimiter, field delimiter, and text qualifier:</p><img alt="import_wizard_delimiters (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20210303/import_wizard_delimiters.jpg" height="512" width="682" /><p>Navicat shows you the progress in real time:</p><img alt="import_wizard_progress (52K)" src="https://www.navicat.com/link/Blog/Image/2021/20210303/import_wizard_progress.jpg" height="434" width="550" /><p>Once you're done, you can save all of your settings for later use, which is not only useful for running the same import on a regular basis, but also allows you to automate it so that imports happen without any additional intervention on your part!</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we covered a few alternatives for performing bulk inserts into MySQL and other DBMS.</p><p>Interested in <a class="default-links" 
href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Joins versus Subqueries: Which Is Faster?</title>
<link>https://www.navicat.com/company/aboutus/blog/1704-joins-versus-subqueries-which-is-faster.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Joins versus Subqueries: Which Is Faster?</title></head><body><b>Feb 18, 2021</b> by Robert Gravelle<br/><br/><p>Joins and subqueries are both used to combine data from different tables into a single result set.  As such, they share many similarities as well as differences. One key difference is performance.  If execution speed is paramount in your business, then you should favor one over the other.  Which one?  Read on to find out!</p><h1 class="blog-sub-title">The Verdict</h1><p>I won't leave you in suspense: between joins and subqueries, joins tend to execute faster. In fact, a query that uses joins will almost always outperform one that employs a subquery.  The reason is that joins mitigate the processing burden on the database by replacing multiple queries with one join query. This in turn makes better use of the database's ability to search through, filter, and sort records. Having said that, as you add more joins to a query, the database server has to do more work, which translates to slower data retrieval times.</p><p>While joins are a necessary part of data retrieval from a normalized database, it is important that joins be written correctly, as improper joins can result in serious performance degradation and inaccurate query results. There are also some cases where a subquery can replace complex joins and unions with only minimal performance degradation, if any.</p><h1 class="blog-sub-title">Examples of Subqueries</h1><p>Sometimes you can't easily get at the data you want without a subquery. Here are a couple of examples using the Sakila Sample Database for MySQL and the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> database development and admin client.</p><h3>Example #1: Using an Aggregate Function as Part of a Join Clause</h3><p>Most of the time, tables are joined on a common field. 
In fact, the common fields often share the same name in order to show that they refer to the same piece of information. But that is not always the case. In the following query, the customer table is joined to the latest (MAX) <i>create_date</i> so that query results pertain to the customer with the most recent sign-up date:</p><img alt="subquery_in_join_clause (268K)" src="https://www.navicat.com/link/Blog/Image/2021/20210218/subquery_in_join_clause.jpg" height="827" width="753" /><p>In the above SELECT statement, a subquery is employed because you cannot use aggregate functions as part of a WHERE clause. This ingenious workaround circumvents that limitation!</p><h3>Example #2: Double Aggregation</h3><p>In this example, a subquery is employed to fetch an intermediary result set so that we can apply the AVG() function to the COUNT of movies rented. This is what I call a double aggregation because we are applying an aggregation (AVG) to the result of another (COUNT).</p><img alt="aggregate_of_an_aggregate (70K)" src="https://www.navicat.com/link/Blog/Image/2021/20210218/aggregate_of_an_aggregate.jpg" height="289" width="714" /><p>This particular query is quite fast - taking only 0.044 seconds - because the inner query returns a single (scalar) value. Usually, the slowest queries are those that require full table scans, which is not the case here.</p><h1 class="blog-sub-title">Conclusion</h1><p>While both joins and subqueries have their place in SQL statements, I would personally recommend that you always try to write your queries using joins exclusively. Only when you cannot fetch the data you're interested in without a subquery should you introduce one.</p><p>Interested in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Database Optimization: an Overview</title>
<link>https://www.navicat.com/company/aboutus/blog/1703-database-optimization-an-overview.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Optimization: an Overview</title></head><body><b>Jan 22, 2021</b> by Robert Gravelle<br/><br/><p>Database optimization is a rather large and sprawling topic that encompasses a multitude of strategies for reducing database system response times. These are often tailored to the specific usage patterns of a database instance or cluster. For instance, in some cases, lightning fast queries might be a goal, whereas for some organizations, faster write times may be what's desired most.</p><p>Improving query response times may include activities such as:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>careful construction of queries</li><li>use of indexes</li><li>using analysis tools such as EXPLAIN</li></ul><p>In today's blog, we'll learn more about this vital topic in database administration.</p><h1 class="blog-sub-title">Optimization Activities Described</h1><p>As mentioned in the introduction, database optimization involves a number of strategies whose aim is to reduce database system response times. To that end, administrators (DBAs), developers and analysts may seek to decrease write times by working to improve the servers' data access methods and retrieval times through design techniques, statistical analysis and monitoring of system traffic. In this role, DBAs/analysts need to possess a strong knowledge of the structure of the data, the applications installed on the server and the impact varied tasks have on the database's overall performance.</p><p>Typically, database tuning and optimization can require a high degree of expertise, an understanding of execution plans, as well as the ability to write high-performing SQL. It also tends to be a highly time-consuming endeavor, because there can be a huge number of SQL statements to fine tune. 
Once you've determined which statements need tuning, you then need to refine your tuning approach to suit each and every query, as there is no one-size-fits-all solution.</p><h1 class="blog-sub-title">Tools of the Trade</h1><p>Query optimization is usually the best place to focus your efforts for two reasons: it's the easiest part of the optimization equation and tends to deliver the most bang for your buck in terms of reward versus effort. Part of the reason that query optimization is the lowest-hanging fruit is that there are a number of tools that you can use to aid you in your quest for improved database performance. Here are a few:</p><h3>Using EXPLAIN</h3><p>If you've got a query that consistently runs slowly, then it probably needs to be optimized. A good way to see what it needs is to use the EXPLAIN command. It returns a formatted description of the query optimizer's execution plan for the specified statement. You can use this information to analyze and troubleshoot the query.</p><p>By default, EXPLAIN output represents the query plan as a hierarchy whereby each level represents a single database operation that the optimizer uses to execute the query.  In Navicat database clients, there's a button in the SQL Editor that runs EXPLAIN. Results are displayed in an easy-to-read grid format:</p><img alt="explain_button (47K)" src="https://www.navicat.com/link/Blog/Image/2021/20210122/explain_button.png" height="592" width="1127" /><h3>Analyzing Query Performance using a Monitoring Tool</h3><p>You can also analyze your query performance using a tool like <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>. It has a Query Analyzer that shows information about all executing queries. Moreover, it can help identify slow queries and detect deadlocks, i.e., situations where two or more queries permanently block each other. 
</p><img alt="query_analyzer (125K)" src="https://www.navicat.com/link/Blog/Image/2021/20210122/query_analyzer.jpg" height="621" width="1023" /><h1 class="blog-sub-title">Conclusion</h1><p>Finally, if your DBMS supports query profiling, you can use it to measure query execution time. While perhaps not quite as powerful as the tools we saw here today, it might be worth a try.</p></body></html>]]></description>
</item>
<item>
<title>What is a Flat File Database?</title>
<link>https://www.navicat.com/company/aboutus/blog/1702-what-is-a-flat-file-database.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What is a Flat File Database?</title></head><body><b>Jan 7, 2021</b> by Robert Gravelle<br/><br/><p>While you've almost certainly heard of relational and NoSQL databases, there is a better than even chance that you're completely unfamiliar with flat file databases. Flat file databases are indeed a real thing, but they don't get much love these days.  As we'll learn in today's blog, there is a better way to work with flat file databases than in years gone by.  In fact, if you use any of Navicat's database development and admin clients, you're in the ideal position to do so!</p><h1 class="blog-sub-title">History and Limitations</h1><p>Flat file databases have been around ever since the very first computers.  They are a type of database that stores data in a plain text file, whereby each line of the file holds one record, and fields are separated by delimiters - typically commas or tabs. As such, flat file databases share more in common with a spreadsheet than with a relational database.  Due to their simple structure, the "tables" represented within a flat file database support limited functionality, such as record and column sorting. </p><p>Flat file databases flourished as a back-end to applications. Their simple structure takes up less space than structured database files and works well for configuration data. If you have some programming savvy, you can find ODBC drivers for most languages for interfacing with flat file databases.  Unfortunately, most relational database clients cannot connect directly to a flat file database.  
However, relational databases provide commands to import flat file databases and use them in a larger relational database.</p><h1 class="blog-sub-title">Importing a Flat file</h1><p>If the structure of a flat file database sounds familiar to you, it's because it is very similar to a CSV (Comma Separated Values), TSV (Tab Separated Values), or any DSV (Delimiter Separated Values) file.</p><p>Every relational database provides its own command(s) for importing data from a flat file. For example, MySQL provides the LOAD DATA INFILE statement. You have to create the database and table(s) first, but you only need to do that once for each data set. Once you've done that, LOAD DATA INFILE is very fast! Here's an example statement for importing a CSV (Comma Separated Values) file:</p><pre>LOAD DATA INFILE 'c:/path/to/file.csv'  INTO TABLE discounts  FIELDS TERMINATED BY ','  ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 ROWS; </pre><h1 class="blog-sub-title">Using Navicat Import</h1><p>Navicat's powerful Import utility is a wizard-driven process that helps you import a wide variety of formats from DSV, JSON, XML, and more.</p><img alt="import_formats (32K)" src="https://www.navicat.com/link/Blog/Image/2021/20210107/import_formats.jpg" height="399" width="505" /><p>It lets you choose your record delimiter, field delimiter, and text qualifier:</p><img alt="delimiter_screen (22K)" src="https://www.navicat.com/link/Blog/Image/2021/20210107/delimiter_screen.jpg" height="383" width="485" /><p>You get a full progress report as the import proceeds, including the number of tables and rows processed, along with errors encountered and time taken:</p><img alt="progress_report (43K)" src="https://www.navicat.com/link/Blog/Image/2021/20210107/progress_report.jpg" height="434" width="550" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned about flat file databases and how to import them into your relational database using native database commands as well as 
Navicat.</p><p>For a more in-depth look at Navicat's Import utility, I wrote the <a class="default-links" href="https://www.databasejournal.com/features/mysql/importing-xml-csv-text-and-ms-excel-files-into-mysql.html" target="_blank">Importing XML, CSV, Text, and MS Excel Files into MySQL</a> article a couple of years ago, showing how to import data in a variety of formats using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>. </p><hr /><p>Rob Gravelle resides in Ottawa, Canada, and has been an IT Guru for over 20 years. In that time, Rob has built systems for intelligence-related organizations such as Canada Border Services and various commercial organizations. In his spare time, Rob has become an accomplished guitar player and has released <a class="default-links" href="https://www.amazon.com/s?k=Rob+Gravelle&i=digital-music&search-type=ss&ref=ntt_srch_drd_B001ES9TTK" target="_blank">several CDs</a>. You can hire Rob by emailing him at rgconsulting@robgravelle.com.</p></body></html>]]></description>
</item>
<item>
<title>Identifying Columns with Missing Values in a Table</title>
<link>https://www.navicat.com/company/aboutus/blog/1701-identifying-columns-with-missing-values-in-a-table.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Identifying Columns with Missing Values in a Table</title></head><body><b>Dec 4, 2020</b> by Robert Gravelle<br/><br/><p>Sometimes a database administrator (DBA) needs to furnish a report on the number of missing values in a table or tables. Whether the goal is to show counts or row content with missing values, there are a couple of ways to go about it, depending on how flexible you want the solution to be.  The first would be to construct a query against the table(s) in question, using information that you have about field names, data types, and constraints. The second, more elaborate, approach would be to write a stored procedure that fetches column info from the INFORMATION_SCHEMA.COLUMNS table.  In today's blog, we'll take a look at the non-generic approach, while next week's blog will address the stored procedure solution.</p><h1 class="blog-sub-title">Showing Nullable Columns</h1><p>Since not every field in a table can accept null values, it is helpful to inspect the table design and see which fields may contain nulls. In <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> database development and admin clients, the Table Design identifies all mandatory columns using a checkbox under the <i>Not null</i> header. Hence, all columns whose checkbox is not checked may contain nulls. Those are the fields on which our query will focus:</p><img alt="nullable_columns (151K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/nullable_columns.jpg" height="497" width="904" /><p>One way to find fields with nulls is to build the query using the Query Builder tool. 
It lets us select many conditions from a menu, including "is null", "is not null", "is empty", "is not empty", etc.</p><img alt="query_designer (123K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/query_designer.jpg" height="728" width="958" /><p>Once built, we can insert the SQL directly into the Editor:</p><img alt="completed_query (55K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/completed_query.png" height="586" width="900" /><p>Here are all rows of a customers table which contain at least one column with a null value:</p><img alt="query_results (111K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/query_results.png" height="827" width="930" /><h1 class="blog-sub-title">Obtaining Stats on Filled and Empty Fields</h1><p>In cases where we only want statistics on filled versus empty fields, we can use the count() function to tally fields which are either filled or null. In the following query, percentages are expressed as the proportion of rows which contain a null value for that particular field:</p><img alt="percentage_of_empty_rows (46K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/percentage_of_empty_rows.png" height="491" width="828" /><p>Likewise, we can count and show null columns for a specific row, identified here by the <i>customerNumber</i>:</p><img alt="stats_for_customer_103 (60K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/stats_for_customer_103.png" height="481" width="874" /><p>In the above query, a CASE statement is employed to only include null values in the counts. This time, the percentage shows how many of the fourteen table columns contain nulls, rounded to 2 decimal places.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to query for missing values in a table or tables. Next week's blog will introduce a more generic approach using a stored procedure. In the meantime, here's a query for SQL Server to whet your appetite.  
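In spirit, it builds one probe query per nullable column from the metadata. A simplified sketch of the idea (my own illustration; the actual statement shown in the screenshot is more elaborate) might read:</p><pre>SELECT 'SELECT * FROM [' + TABLE_NAME + '] WHERE [' + COLUMN_NAME + '] IS NULL'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE IS_NULLABLE = 'YES';</pre><p>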
It fetches column metadata from the INFORMATION_SCHEMA.COLUMNS table in order to generate queries for every table:</p><img alt="query_generator (249K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/query_generator.jpg" height="624" width="934" /><p>The above query will return a list of select queries. We can then copy &amp; paste these to the Navicat Query Editor with a 'union' between the selects to find missing values in every table in a database!</p><img alt="generated_sql_statements (53K)" src="https://www.navicat.com/link/Blog/Image/2020/20201204/generated_sql_statements.jpg" height="158" width="863" /></body></html>]]></description>
</item>
<item>
<title>Calculating Average Daily Counts in SQL Server</title>
<link>https://www.navicat.com/company/aboutus/blog/1700-calculating-average-daily-counts-in-sql-server.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Calculating Average Daily Counts in SQL Server</title></head><body><b>Nov 20, 2020</b> by Robert Gravelle<br/><br/><p>Calculating average daily counts seems like something that would be done fairly often, and yet, I have never done it. I asked my wife, who is also a programmer of database-backed applications, and she never had the occasion to do so either! So, it is with great enthusiasm that I take on this challenge today.</p>    <p>To qualify what is meant by an "average daily count", for the purposes of this blog, it describes a monthly count, such as patients seen at a doctor's office, widgets manufactured, or products sold within a month.  The daily average is then calculated by dividing the monthly count by the number of days in the month, so that we know how much each day contributed towards the monthly total.  For example, if a car dealership sold 10 Honda Civics in the month of May, then those 10 sales represent an average of 0.32 of one vehicle per day.  Meanwhile, if the dealership were to sell a whopping 50 Honda Civics in one month, then the daily average would soar to 1.61 Honda Civics per day.</p><h1 class="blog-sub-title">The Query</h1><p>Our SELECT statement will query the Sakila Sample Database to tabulate movie rentals for each month in the following format:</p><pre>ID| MONTH | MONTHLY_COUNT | AVG_DAILY_COUNT
-------------------------------------------
 1| Jan   | 152           | 4.9
 2| Jan   | 15000         | 483.9
 3| Jan   | 14255         | 459.8
 1| Feb   | 4300          | 153.6
 2| Feb   | 9700          | 346.4
 3| Feb   | 1900          | 67.9
etc...</pre><p>The AVG_DAILY_COUNT column above increases the complexity of the query substantially because we need to obtain the monthly counts first. Therefore, the query consists of both inner and outer SELECT statements. 
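In outline, the nested statement might look like this (a sketch assembled from the code fragments discussed below; column names are kept where they are visible in the screenshots, while the rest is my own reconstruction):</p><pre>SELECT inventory_id, rental_year, rental_month,
       cnt AS monthly_count,
       round(cast(cnt as FLOAT) /
             cast(datediff(day,
                           datefromparts(rental_year, rental_month, 1),
                           dateadd(month, 1, datefromparts(rental_year, rental_month, 1))) as FLOAT),
             4) AS avg_daily_count
FROM (SELECT inventory_id,
             YEAR(rental_date)  AS rental_year,
             MONTH(rental_date) AS rental_month,
             COUNT(*)           AS cnt
      FROM rental
      GROUP BY inventory_id, YEAR(rental_date), MONTH(rental_date)) AS monthly
ORDER BY rental_year, rental_month, inventory_id;</pre><p>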
Here is the inner query and results, sorted by year, month, and inventory_id:</p><img alt="inner_query (150K)" src="https://www.navicat.com/link/Blog/Image/2020/20201120/inner_query.jpg" height="897" width="690" /><h3>The Outer Query</h3><p>From that data, we can tabulate the average daily counts as follows:</p><img alt="outer_query (165K)" src="https://www.navicat.com/link/Blog/Image/2020/20201120/outer_query.jpg" height="834" width="883" /><p>I included the number of days in each month for reference, since it plays a key role in calculating the daily averages. Here is the code that computes it, isolated from the rest of the query:</p><pre>datediff(day,
         datefromparts(rental_year, rental_month, 1),
         dateadd(month, 1, datefromparts(rental_year, rental_month, 1))) days_in_month</pre><p>The datediff() function returns the number of days between the first day of the month and the first day of the following month.  The datefromparts() function creates a date from the rental_year and rental_month columns of the inner query.</p><p>We can see the same code in the calculation of the daily_avg:</p><pre>round(
    cast(cnt as FLOAT) /
    cast(datediff(day,
                  datefromparts(rental_year, rental_month, 1),
                  dateadd(month, 1, datefromparts(rental_year, rental_month, 1))) as FLOAT),
    4) daily_avg</pre><p>Notice that the dividend (cnt) and the divisor (days in month) are cast to FLOAT; without a cast, integer division discards the decimals.  
We want to keep as much precision as possible until the end, where we round to four decimal places.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we calculated the average daily counts for a given column in SQL Server using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a>.  Interested in giving Navicat for SQL Server a try? You can download it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Why MySQL (Still) Tops the List of Most Popular Database Platforms</title>
<link>https://www.navicat.com/company/aboutus/blog/1699-why-mysql-still-tops-the-list-of-most-popular-database-platforms.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Why MySQL (Still) Tops the List of Most Popular Database Platforms</title></head><body><b>Nov 16, 2020</b> by Robert Gravelle<br/><br/><p>Choosing between commercial and open source database offerings is not easy, as many popular commercial databases are made available to developers and/or students at a greatly reduced cost or even for free. In other cases, the parent companies offer similar open source versions of their enterprise-level products. </p><p>Judging by user polls, when looking at a snapshot of all database types - whether paid or free - an interesting trend emerges.  It seems that, overall, users of database systems (DBMS) have gone with an open source solution the vast majority of the time. And not just any open source solution, but MySQL specifically. <a class="default-links" href="https://www.mysql.com/products/community/" target="_blank">MySQL Community Edition</a> has held the number one spot on the <a class="default-links" href="https://www.explore-group.com/blog/the-most-popular-databases-2019/bp46/" target="_blank">list of Most Popular Database Platforms</a> for years now. Here are the top 5 databases - both commercial and FREE - for 2019 and their percentage of market share:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>MySQL: 52%</li><li>PostgreSQL: 36%</li><li>MS SQL Server: 34%</li><li>SQLite: 30%</li><li>MongoDB: 26%</li></ul><p>So why is MySQL so darned popular?</p><h1 class="blog-sub-title">It's Open Source</h1><p>As stated above, it's free, but MySQL has a lot more going for it than price. Another attractive feature is that MySQL is open source software. This allows it to be customized or modified according to users' needs. Moreover, many third-party tools and interfaces have been developed for MySQL because no licensing fees are required. 
One of these is, of course, <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>.</p><p>Since its creation in the mid-90s by the Swedish company MySQL AB, MySQL has been acquired by larger companies on a couple of occasions: Sun Microsystems acquired MySQL in 2008, then Oracle purchased it in 2010. Each time, there was much speculation that MySQL's free status would soon change. However, this has not been the case thus far. At this time, all indications are that Oracle will continue to offer MySQL Community Edition completely free of charge.</p><h1 class="blog-sub-title">Versatility</h1><p>MySQL is an extremely versatile database. It's portable enough for development use, and yet robust enough for the most mission-critical applications. In fact, many of the world's largest and fastest-growing organizations, including Facebook, Google, Adobe, Alcatel Lucent and Zappos, rely on MySQL to save time and money powering their high-volume web sites, business-critical systems and packaged software.</p><h1 class="blog-sub-title">The Emergence of Web Applications </h1><p>Some people have proposed that the rise of web applications developed by small startups or individuals has contributed to the popularity of MySQL in numerous ways. Again, besides the price, the fact that MySQL uses basic SQL and is easy to set up and configure has made it the choice to power many web apps.</p><h1 class="blog-sub-title">Community Support</h1><p>Another great feature of MySQL is the tremendous online community that supports it. Freelance developers from all over the world are continually adding to the functionality and utility of the database platform. Paid support is also available from Oracle.</p><h1 class="blog-sub-title">Third Party Software</h1><p>MySQL's tremendous popularity has prompted many third-party vendors to create software for working with it. 
These include development and administration clients such as <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> and monitoring tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>. Although neither of these tools is free, they earn their keep by providing an all-inclusive front-end that features an intuitive and powerful graphical interface for database management, development, and maintenance. Designed to be powerful and yet easy-to-use, tools such as these save developers and administrators time and effort in everything that they do.</p><figure>  <figcaption>Navicat for MySQL Windows Edition</figcaption>  <img alt="Navicat for MySQL Windows Edition" src="https://www.navicat.com/link/Blog/Image/2020/20201116/MySQL_Windows_Mainscreen.png" style="max-width: 800px; height: auto" /></figure><figure>  <figcaption>Navicat Monitor Dashboard - Comfortable View</figcaption>  <img alt="Navicat Monitor Dashboard - Comfortable View" src="https://www.navicat.com/link/Blog/Image/2020/20201116/NavicatMonitor_Dashboard.png" style="max-width: 800px; height: auto"/></figure><h1 class="blog-sub-title">Conclusion</h1><p>With so many great features, as well as the proliferation of high-end user tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>, <a class="default-links" href="https://www.mysql.com/products/community/" target="_blank">MySQL Community Edition</a> continues to enjoy the top spot of the relational database heap.</p></body></html>]]></description>
</item>
<item>
<title>Preventing the Occurrence of Duplicate Records</title>
<link>https://www.navicat.com/company/aboutus/blog/1698-preventing-the-occurrence-of-duplicate-records.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Preventing the Occurrence of Duplicate Records</title></head><body><b>Nov 10, 2020</b> by Robert Gravelle<br/><br/><p>Many database administrators (DBAs) spend at least some of their time trying to identify and remove duplicate records from database tables. Much of this time could be diverted to other pursuits if more attention were paid to preventing duplicates from being inserted in the first place. In principle, this is not difficult to do. However, in practice, it is all too possible to have duplicate rows and not even know it!  Today's blog will present a few strategies for minimizing the occurrence of duplicate records by stopping them at insertion time.</p><h1 class="blog-sub-title">Employ PRIMARY KEY and UNIQUE Indexes</h1><p>To ensure that rows in a table are unique, one or more columns must be constrained to reject non-unique values. By satisfying this requirement, any row in the table may be quickly retrieved via its unique identifier. We can enforce column uniqueness by including a PRIMARY KEY or UNIQUE index on the applicable fields. </p><p>To illustrate, let's take a look at a table that contains product details such as the name, line, vendor, description, etc. Here it is in the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> Table Designer:</p><img alt="products_table_in_table_designer (158K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/products_table_in_table_designer.jpg" style="max-width: 800px; height: auto" /><p>Navicat indicates fields that are part of a KEY using a key icon under the <i>Key</i> heading, along with a number that denotes its position within a composite key. A single key with a number <i>1</i> tells us that the <i>productCode</i> is the sole Primary Key (PK) column for the table. 
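</p><p>In raw SQL, the same design can be declared when the table is created. Here is a minimal sketch; the column list is abbreviated and the data types are assumptions:</p><pre>CREATE TABLE products (
    productCode VARCHAR(15) NOT NULL,
    productName VARCHAR(70) NOT NULL,
    productLine VARCHAR(50) NOT NULL,
    PRIMARY KEY (productCode)
);

CREATE INDEX idx_productLine ON products (productLine);</pre><p>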
By definition, the PK must be unique and may not contain NULL values.</p><p>Meanwhile, if we then take a look at the <i>Indexes</i> tab, it shows that the <i>productLine</i> column is indexed as well:</p><img alt="productLine_index (23K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/productLine_index.jpg" height="99" width="576" /><p>In many instances, a single column is not sufficient to make a row unique, so we must add additional fields to the PK. Here's a <i>payments</i> table that requires both the <i>customerNumber</i> and <i>checkNumber</i> to make a unique PK because the same customer can make several payments:</p><img alt="payments_table_in_table_designer (74K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/payments_table_in_table_designer.jpg" height="285" width="799" /><h1 class="blog-sub-title">The Downside of Auto-incrementing Primary Keys</h1><p>Many database designers/developers (myself included!) love using numeric auto-incrementing PKs because:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">   <li> Ease of use. The database takes care of them for you!</li>   <li> Collisions are impossible because each new row receives a unique integer.</li></ul><p>Here is just such a table:</p><img alt="actor_table_in_table_designer (75K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/actor_table_in_table_designer.jpg" height="342" width="722" /><p>In Navicat, all you need to do to create an auto-incrementing PK is to choose a numeric data type (such as an integer) and check the Auto Increment box. That will cause all values for that column to be generated by the database.</p><p>And now for the bad news: auto-incrementing PKs do little to prevent duplicate rows - especially if you don't include any other table indexes. For example, imagine that the above table did not have any additional indexes. 
There would be nothing stopping someone from inserting a row with the exact same <i>first_name</i> and <i>last_name</i> as an existing row.</p><p>In fact, we can put that theory to the test right now! In Navicat, we can insert a new row directly into the Grid by clicking the plus (+) button:</p><img alt="duplicate_row (10K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/duplicate_row.png" height="155" width="428" /><p>As expected (feared), the duplicate name was accepted!</p><img alt="duplicate_row_accepted (5K)" src="https://www.navicat.com/link/Blog/Image/2020/20201110/duplicate_row_accepted.png" height="71" width="439" /><h1 class="blog-sub-title">Conclusion</h1><p>The moral of today's story is that, while one can prevent duplicate rows from being inserted, this does not necessarily mean that all data duplication can be prevented.  At a minimum, designers/developers must take additional precautions by either employing rigorous normalization in database design or by performing specific validation at the application level.</p></body></html>]]></description>
</item>
<item>
<title>A Guide to Refreshing Test Data</title>
<link>https://www.navicat.com/company/aboutus/blog/1694-a-guide-to-refreshing-test-data.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>A Guide to Refreshing Test Data</title></head><body><b>Oct 30, 2020</b> by Robert Gravelle<br/><br/><p>The periodic reverting of database instances to a baseline dataset is a common practice in development and test environments. Case in point, the office where I work does so on a regular basis, whenever data diverges too much from the baseline. This is required because developers and automated tests expect the data to be of a certain quantity and quality.  There is no single right way to overwrite table contents, so you should choose an option based on your organization's particular goals and circumstances.  In today's blog, I'll share what we do where I work as well as my standard process at <a class="default-links" href="mailto:rgconsulting@robgravelle.com">Gravelle Web Development</a>.</p><h1 class="blog-sub-title">SQL Scripting at Work</h1><p>An SQL script is a set of SQL commands saved as a file, typically with a .sql extension. An SQL script can contain both SQL statements and PL/SQL blocks. SQL scripts provide a simple means of grouping related SQL functionality for reuse whenever needed. All popular relational databases can run an SQL script directly from the command line.  For example, in MySQL, you can invoke the SQL script as follows:</p><pre>shell&gt; mysql --user="username" --database="databasename" --password="yourpassword" &lt; "path to sql file"</pre><h1 class="blog-sub-title">Creating a Table Refresh Script</h1><p>In my experience, the easiest way to create a script to reset table data is to use a dump utility.  Keeping with MySQL, the installation process includes the mysqldump utility.  It can create SQL statements to both truncate the table and repopulate rows with baseline data.  
Mysqldump has a number of options, but all that is really needed are the database and SQL file names:</p><pre>shell&gt; mysqldump db_name &gt; backup-file.sql</pre><p>Whichever utility you use, it's crucial that the generated SQL includes a DROP TABLE statement before the table population. Mysqldump has a <i>--add-drop-table</i> option, but it's ON by default, so you don't need to include it under normal usage.</p><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> database development and admin clients include a <i>Dump SQL File</i> command. Like mysqldump, it also provides many options, including whether to dump both the structure and data or structure only:</p><img alt="dump_sql_file_command (105K)" src="https://www.navicat.com/link/Blog/Image/2020/20201030/dump_sql_file_command.jpg" height="664" width="508" /><p>Here is a sample generated file in the Navicat SQL Editor. As you can see, the DROP TABLE IF EXISTS command precedes the CREATE statement:</p><img alt="sql_file_contents (217K)" src="https://www.navicat.com/link/Blog/Image/2020/20201030/sql_file_contents.jpg" height="842" width="806" /><h1 class="blog-sub-title">Truncating a Table</h1><p>Whereas the above script recreates the table from scratch, you can also TRUNCATE a table and then re-insert the data from a backup table using the INSERT INTO command:</p><pre>TRUNCATE TABLE dbo.T1;
INSERT INTO D1.dbo.T1 SELECT * FROM D2.dbo.T1;</pre><p>Here's an example in Navicat:</p><img alt="insert_into_command (47K)" src="https://www.navicat.com/link/Blog/Image/2020/20201030/insert_into_command.jpg" height="320" width="589" /><p>Note that, in MySQL at least, the database is still dropping the table and re-creating it behind the scenes via the SQL CREATE TABLE statement. 
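</p><p>In MySQL, where there is no <i>dbo</i> schema prefix, the same refresh might look like this - a sketch that assumes a baseline copy of the table is kept in a second database (the <i>dev_db</i> and <i>baseline_db</i> names are hypothetical):</p><pre>TRUNCATE TABLE dev_db.t1;
INSERT INTO dev_db.t1 SELECT * FROM baseline_db.t1;</pre><p>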
Besides being faster than deleting all rows with a DELETE statement, TRUNCATE TABLE resets all auto-increment fields to start over at 1, which is usually preferable to letting them continue from their previous values.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we explored a couple of ways to reset table data to a baseline for development and test environments.  <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a> can help create .sql scripts as well as execute them with ease.  Moreover, its Automation Tool can schedule scripts to run according to a variety of schedules so that you can set up your jobs and then let Navicat handle the rest.</p></body></html>]]></description>
</item>
<item>
<title>All About ORDINAL_POSITION</title>
<link>https://www.navicat.com/company/aboutus/blog/1693-all-about-ordinal_position.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>All About ORDINAL_POSITION</title></head><body><b>Oct 23, 2020</b> by Robert Gravelle<br/><br/><p>In relational databases, including MySQL, SQL Server, Oracle, and others, the ORDINAL_POSITION refers to a column's position within a table or query output.  In today's blog, we'll learn how to use ordinal positioning to present columns in our preferred order, using <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium</a> as our database client.</p><h1 class="blog-sub-title">How the ORDINAL_POSITION Affects Query Output</h1><p>When viewed in a grid format, columns are ordered from left to right. For instance, here are the columns of an order details table as they appear in Navicat's Grid View:</p><img alt="orderdetails_columns_in_grid_view (95K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/orderdetails_columns_in_grid_view.jpg" height="501" width="547" /><p>Meanwhile, in the Table Designer, you can see that the left-to-right column order above corresponds to the top-to-bottom order there:</p><img alt="orderdetails_columns_in_table_designer (101K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/orderdetails_columns_in_table_designer.jpg" height="341" width="822" /><p>By default, columns appear in the order in which they were created. Column ordering matters because it determines the default column order for every table, view, and SELECT query in the database.  As we'll see a little later on, if we don't like the order of columns in a table, we aren't stuck with it.</p><h1 class="blog-sub-title">Obtaining Ordinal Positioning of a Table</h1><p>Since you can create views that can obscure the true ordinal positioning of a table, relational databases offer a means of finding out what it is.  ORDINAL_POSITION is a column in the INFORMATION_SCHEMA.COLUMNS table. 
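</p><p>The query itself is straightforward (MySQL syntax; the <i>classicmodels</i> schema name is an assumption based on the sample database in the screenshots - substitute your own):</p><pre>SELECT COLUMN_NAME, ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'classicmodels'
  AND TABLE_NAME = 'orderdetails'
ORDER BY ORDINAL_POSITION;</pre><p>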
As such, you can find the ordinal position of columns in a table by querying it as follows:</p><img alt="information_schema_columns_table (39K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/information_schema_columns_table.jpg" height="289" width="382" /><h1 class="blog-sub-title">Changing the Ordinal Positioning of a Table</h1><p>So what do you do if you'd like columns to appear in a different order than they were created? As I alluded to earlier, you don't have to keep a column's original ORDINAL_POSITION. You can change it using one of the following statements:</p><pre>ALTER TABLE orderdetails MODIFY COLUMN orderLineNumber smallint(6) AFTER quantityOrdered; </pre><p>OR:</p><pre>ALTER TABLE orderdetails CHANGE COLUMN orderLineNumber orderLineNumber smallint(6) AFTER quantityOrdered; </pre><p>Note that CHANGE COLUMN requires both the old and new column names, even when the name stays the same. The above statements move the <i>orderLineNumber</i> column from last position to second-last.</p><p>In Navicat, the Table Designer has Move Up and Move Down buttons that make changing column ordering a snap:</p><img alt="move_up_button (58K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/move_up_button.jpg" height="208" width="615" /><p>After selecting the <i>orderLineNumber</i> column, every click of the Move Up button changes its ORDINAL_POSITION by one place:</p><img alt="new_orderLineNumber_position (41K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/new_orderLineNumber_position.jpg" height="178" width="515" /><p>After saving the table design, the <i>orderLineNumber</i> is now before the <i>priceEach</i>:</p><img alt="new_orderLineNumber_position_in_grid_view (58K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/new_orderLineNumber_position_in_grid_view.jpg" height="278" width="546" /><h1 class="blog-sub-title">Referencing ORDINAL_POSITION in SELECT Queries</h1><p>Ordinal positioning is not just about default column ordering; it can also be referenced in SELECT queries as a short-cut for column names.  
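</p><p>In plain SQL, the idea looks like this - a sketch against the <i>orderdetails</i> table, where the ordinals refer to positions in the SELECT list:</p><pre>SELECT orderNumber, productCode, quantityOrdered
FROM orderdetails
ORDER BY quantityOrdered, productCode;

-- equivalent, using ordinal positions:
SELECT orderNumber, productCode, quantityOrdered
FROM orderdetails
ORDER BY 3, 2;</pre><p>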
To illustrate, here's a query that references a couple of columns in the ORDER BY clause: </p><img alt="orderdetails_query_with_column_name (111K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/orderdetails_query_with_column_name.jpg" height="622" width="548" /><p>Rather than spell out each column name in the ORDER BY clause, we can simply refer to each column's ordinal position within the SELECT list (which matches the table's column order when all columns are selected):</p><img alt="orderdetails_query (109K)" src="https://www.navicat.com/link/Blog/Image/2020/20201023/orderdetails_query.jpg" height="621" width="552" /><p>Shorter SQL, same results!</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how the ORDINAL_POSITION determines a column's position within a table or query output, as well as how to use ordinal positioning to present columns in our preferred order.</p><p>Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>What Is Ransomware and Why You Should Be Concerned</title>
<link>https://www.navicat.com/company/aboutus/blog/1692-what-is-ransomware-and-why-you-should-be-concerned.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>What Is Ransomware and Why You Should Be Concerned</title></head><body><b>Oct 20, 2020</b> by Robert Gravelle<br/><br/><p>Ransomware attacks are nothing new.  In fact, the first known ransomware virus was created in 1989! So why bring them up now? While the frequency of ransomware attacks has fluctuated over the years, recent statistics show that ransomware attacks rose significantly in frequency in 2019 and won't be letting up any time soon.  For that reason, you should know what ransomware attacks are, how they work, and how best to deter malicious entities from targeting your organization.  And that is exactly what you'll learn here today!</p><h1 class="blog-sub-title">Ransomware Attacks Explained</h1><p>Ransomware attacks typically use much the same tricks to gain access to your server(s) as other hacking attacks.  For instance, a phishing attack in the form of an e-mail with an infected attachment may be sent to employees of an organization. All it takes is for one user to open the attachment to allow the ransomware to execute on the network. Once inside your network, the ransomware immediately begins to encrypt files on the target system.  Moreover, the malicious software may also attempt to encrypt files on any mapped drives or network-connected devices that the infected machine has write permission to. In the case of databases, the database file - where the schema and data are stored on the hard drive - may be encrypted. What's worse, not only do your data and log files become unreadable, but you can even lose access to your backups. Once your backups are encrypted, you (and your entire organization) may be at the mercy of the attacker(s)! </p><p>Once the database file has been encrypted, the attacker(s) will make contact, or the server will respond to requests with a demand for payment - typically in Bitcoin - in exchange for a key to decrypt the files. 
</p><img alt="800px-Ransomware-pic (81K)" src="https://www.navicat.com/link/Blog/Image/2020/20201020/800px-Ransomware-pic.jpg" height="450" width="800" /><h1 class="blog-sub-title">What Can Be Done to Prevent Ransomware Attacks</h1><p>The old sports adage that "the best defense is a good offense" holds just as true for cyberattacks.  One would hope that your organization maintains a strong firewall that keeps as many malicious emails out of employees' hands as possible.  Users also play a vital role as guardians of the organization's infrastructure.  As such, it is every employee's responsibility to only open attachments whose contents they know are safe.  Even family and friends cannot be completely trusted, as their email accounts may be hacked.</p><p>So, what can Database Administrators (DBA) do to help bolster security? </p><p>Quite a lot, actually.</p><p>In many instances, ransomware attacks have capitalized on an unpatched vulnerability.  Upon reviewing the thousands of MySQL ransomware attacks a few years ago, investigators determined that the underlying security weakness in the vast majority of cases was gaping holes in security protocols - or an almost complete lack thereof!  The good news is that you can avoid becoming a victim of many ransomware attacks simply by installing the latest security patches as soon as they are released. </p><p>System administrators of MariaDB, MySQL, and SQL Server can help protect their server instances by monitoring performance on a regular basis. <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor </a> is the best way to always know exactly what is happening on your database servers. Monitoring is especially effective in cases where hackers make tentative intrusions into systems to ascertain the best targets on which to introduce their malware. 
Vigilant monitoring helps highlight unusual system activity so it can be addressed by database administrators and security personnel. </p><img alt="02.Product_01_NavicatMonitor_01a_Dashboards_Comfort (103K)" src="https://www.navicat.com/link/Blog/Image/2020/20201020/02.Product_01_NavicatMonitor_01a_Dashboards_Comfort.png" height="790" width="1200" /><h1 class="blog-sub-title">Conclusion</h1><p>With ransomware attacks on the rise in 2020, now is the time to take steps to protect your data and organization.  <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor </a> helps catch suspicious activity before your database is compromised! </p></body></html>]]></description>
</item>
<item>
<title>Filtering Dates by Month</title>
<link>https://www.navicat.com/company/aboutus/blog/1688-filtering-dates-by-month.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Filtering Dates by Month</title></head><body><b>Oct 14, 2020</b> by Robert Gravelle<br/><br/><p>Months can be notoriously difficult to work with due to a variety of factors, including their variability in length. To make database developers' jobs easier, most relational databases (DBMS) offer functions such as MONTH() and MONTHNAME(). These two functions are great for grouping results by month and for displaying their values.  In today's blog, we'll learn how to use specialized SQL functions for working with months.</p><h1 class="blog-sub-title">Working with Month Functions</h1><p>The MONTH() and MONTHNAME() functions are both implemented in MySQL. However, every database type has its own date and time functions, so you will likely have to refer to the documentation to find the equivalent functions for your database. To illustrate, SQL Server does not provide the MONTHNAME() function.  Instead, DATENAME() may be employed to return any date part, including the month name.  </p><p>Navicat greatly simplifies looking up the right date function for your DBMS via its auto-complete list.  More than just built-in functions, it includes everything from tables and views to stored procedures and user functions. 
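</p><p>For instance, here are the MySQL and SQL Server spellings of the same lookup (a quick sketch):</p><pre>-- MySQL
SELECT MONTHNAME('2020-10-14');        -- October

-- SQL Server
SELECT DATENAME(month, '2020-10-14');  -- October</pre><p>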
Here is the MONTH() function:</p><img alt="auto-complete (30K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/auto-complete.jpg" height="243" width="502" /><p>Once you select a function or procedure, it is inserted into the SQL Editor at the cursor position with tabbable input parameters, ready to be filled in:</p><img alt="monthname_function_cursor (7K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/monthname_function_cursor.jpg" height="57" width="248" /><h1 class="blog-sub-title">MONTH() and MONTHNAME() Described</h1><p>The MONTH() function accepts a date and returns an integer value which represents the month of a specified date between 1 and 12:</p><img alt="month_function (27K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/month_function.jpg" height="269" width="381" /><p>Meanwhile, MONTHNAME() returns the month name for a specified date:</p><img alt="monthname_function (29K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/monthname_function.jpg" height="264" width="382" /><h1 class="blog-sub-title">Working with the MONTH() and MONTHNAME() Functions</h1><p>Using both these functions together allows us to group and/or sort by month order while displaying the month name. To see them in action, let's write a query against the Sakila Sample Database that shows movie rentals for a given month. In case you aren't familiar with the Sakila Sample Database, it's a learning database for MySQL that represents a fictitious movie rental store chain. 
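</p><p>For reference, the query we'll build up over the next few screenshots boils down to this pattern in plain SQL - a sketch against the Sakila <i>rental</i>, <i>inventory</i>, and <i>film</i> tables (the exact column list in the screenshots may differ):</p><pre>SELECT MONTHNAME(r.rental_date) AS rental_month, f.title
FROM rental r
JOIN inventory i ON i.inventory_id = r.inventory_id
JOIN film f ON f.film_id = i.film_id
WHERE MONTH(r.rental_date) = 5
  AND YEAR(r.rental_date) = 2005
ORDER BY r.rental_date, f.title;</pre><p>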
Here are the first several rows of the rental table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>:</p><img alt="rental_table (337K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/rental_table.jpg" height="763" width="958" /><p>Now, let's select the rental date and some film details, ordered by rental_date and film title:</p><img alt="rental_query (160K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/rental_query.jpg" height="763" width="536" /><p>To limit results to a given month, we can add a WHERE clause that includes the MONTH() function as follows:</p><img alt="rental_query_filtered_by_month (151K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/rental_query_filtered_by_month.jpg" height="750" width="460" /><p>To see the month name, we just have to include the MONTHNAME() function in the column list:</p><img alt="rental_query_with_month_name (139K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/rental_query_with_month_name.jpg" height="592" width="534" /><h1 class="blog-sub-title">Filtering Results by Month AND Year</h1><p>Without specifying a year in the WHERE clause, filtering results by a specified month will show results that span ALL years of data. In many cases, this is not what you want.  The solution is to include the YEAR() function, along with MONTH(), in the WHERE clause.  Here's the updated query to limit results to May of 2005:</p><img alt="rental_query_with_year (140K)" src="https://www.navicat.com/link/Blog/Image/2020/20201014/rental_query_with_year.jpg" height="670" width="485" /><h1 class="blog-sub-title">Conclusion</h1><p>Database functions such as MONTH() and MONTHNAME() are highly useful for grouping results by month and for displaying their values.  The two functions that we looked at today (three if you include YEAR()!) are supported by MySQL.  
For other databases, be sure to refer to the documentation to find their equivalent functions.  After that, you can use Navicat's auto-suggest feature to insert the appropriate function into your SQL statements.</p></body></html>]]></description>
</item>
<item>
<title>Achieving Lightning Fast Query Response Time in MySQL 8</title>
<link>https://www.navicat.com/company/aboutus/blog/1687-achieving-lightning-fast-query-response-time-in-mysql-8.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Achieving Lightning Fast Query Response Time in MySQL 8</title></head><body><b>Oct 9, 2020</b> by Robert Gravelle<br/><br/><p>Behind the slick User Interface (UI) of modern web applications, there are asynchronous services fetching data from the database with a multitude of objectives, including loading drop-downs, populating data tables, synchronizing components, and many others. Any lagging of the back-end processes will be perceived by the user as a slow or even a non-responsive application.  This in turn degrades the user experience and sours their opinion of your application.  For that reason, it is imperative that you whittle down your query response time to the lowest feasible value.  In many cases, this means measuring query turn-around in hundredths of a second, as opposed to seconds. </p><p>Needless to say, achieving sub-second response times takes some doing beyond defining indexes on searchable fields. In today's blog, we'll take a look at some techniques for making your queries maximally performant in MySQL 8.</p><h1 class="blog-sub-title">The EXPLAIN Command</h1><p>A good way to see what a query needs in order to perform better is to use the EXPLAIN command. It returns a formatted description of the query optimizer's execution plan for the specified statement. You can use this information to analyze and troubleshoot the query.</p><p>By default, EXPLAIN output represents the query plan as a hierarchy whereby each level represents a single database operation that the optimizer defines to execute the query. It takes a bit of practice to get accustomed to EXPLAIN's output, but the more you do it, the better you'll get at understanding where your queries need tweaking.</p><p>In <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">Navicat for MySQL</a>, there's a button in the SQL Editor that runs EXPLAIN for me. 
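</p><p>From a command-line session, the same plan can be produced directly. MySQL 8 also offers EXPLAIN ANALYZE (8.0.18 and later), which actually executes the statement and reports measured row counts and timings. A quick sketch, using the Sakila rental table as an example:</p><pre>EXPLAIN FORMAT=TREE
SELECT * FROM rental WHERE rental_date &gt;= '2005-05-01';

EXPLAIN ANALYZE
SELECT * FROM rental WHERE rental_date &gt;= '2005-05-01';</pre><p>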
Results are displayed in an easy-to-read grid format:</p><img alt="explain_button (47K)" src="https://www.navicat.com/link/Blog/Image/2020/20201009/explain_button.png" height="592" width="1127" /><h1 class="blog-sub-title">Query Profiling</h1><p>You can use query profiling to measure query execution time. Here's how to do that in MySQL:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Start the profiler with:<pre>SET profiling = 1; </pre></li><li>Execute your query, then list the queries the profiler has statistics for with:<pre>SHOW PROFILES;</pre></li><li>Choose which query to examine with the statement:<pre>SHOW PROFILE FOR QUERY 1; </pre>...or whatever number is assigned to your query.</li></ul><p>You'll then get a list showing exactly how much time was spent during each stage of the query:</p><img alt="show_profile (179K)" src="https://www.navicat.com/link/Blog/Image/2020/20201009/show_profile.jpg" height="791" width="788" /><p>You can also get a profile of CPU usage:</p><img alt="show_cpu_profile (118K)" src="https://www.navicat.com/link/Blog/Image/2020/20201009/show_cpu_profile.jpg" height="674" width="441" /><h1 class="blog-sub-title">Analyzing Query Performance Using Navicat Monitor</h1><p>Navicat Monitor is an agentless remote server monitoring tool that is packed with powerful features to make your monitoring as effective as possible. It works with MySQL, MariaDB and SQL Server, as well as cloud databases like Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud and Microsoft Azure. The Query Analyzer screen shows information about all executing queries. 
You can use it to further analyze and evaluate your query performance:</p><img alt="query_analyzer (125K)" src="https://www.navicat.com/link/Blog/Image/2020/20201009/query_analyzer.jpg" height="621" width="1023" /><p>The screen is divided into several sections:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Latest Deadlock Query: Shows the transaction information of the latest deadlock detected in the selected instance.</li><li>Process List: Displays the total number of running processes for the selected instance, and lists the last 5 processes including ID, command type, user, database and time information.</li><li>Query Analyzer: Displays information about query statements with customizable and sortable columns.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we took a look at some techniques for making your queries as expeditious as possible in MySQL 8.</p><p>Interested in Navicat Monitor? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Preventing All Records from Being Deleted Within a Stored Procedure</title>
<link>https://www.navicat.com/company/aboutus/blog/1686-preventing-all-records-from-being-deleted-within-a-stored-procedure.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Preventing All Records from Being Deleted Within a Stored Procedure </title></head><body><b>Oct 6, 2020</b> by Robert Gravelle<br/><br/><p>It's fairly common to allow certain users to perform ad-hoc updates or deletions to tables.  Data Manipulation Language (DML) operations such as these always come with risk, and incidents may occur where someone accidentally issues a DELETE command without a WHERE clause, thereby deleting all rows in a table! Luckily, there are some simple steps you can take to prevent accidental (or deliberate!) destructive DML operations. We'll examine a couple of these in today's blog.</p>  <h1 class="blog-sub-title">A Dangerous Deletion Procedure</h1><p>As a starting point, let's take a stored procedure that will delete rows in a table based on a user-supplied where clause. This SQL Server procedure, shown in the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-sqlserver" target="_blank">Navicat for SQL Server</a> development and admin client, accepts <i>table</i> and <i>whereclause</i> parameters and returns the number of rows deleted:</p><img alt="delete_from_table_procedure (106K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/delete_from_table_procedure.jpg" height="512" width="805" /><p>In Navicat, we can run a procedure right from the editor via the Execute button. Clicking it brings up a dialog to enter parameters:</p><img alt="input_param_dialog (21K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/input_param_dialog.jpg" height="216" width="418" /><p>The <i>delcnt</i> parameter specifies the number of records we expect to be deleted. If we leave it blank and any rows are deleted, the transaction is rolled back and the row(s) preserved. 
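The expected-count fail-safe isn't specific to T-SQL. Here's a minimal, generic sketch of the same guard in Python with SQLite (hypothetical table, column, and helper names; not the article's actual procedure): delete, compare the affected-row count to the expected count, and roll back on a mismatch.

```python
import sqlite3

def safe_delete(con, table, whereclause, delcnt):
    """Delete rows only when the affected count matches the expected
    count (delcnt); otherwise roll the transaction back."""
    # Dynamic SQL mirrors the procedure's table/whereclause parameters;
    # never build SQL like this from untrusted input.
    cur = con.execute(f"DELETE FROM {table} WHERE {whereclause}")
    if cur.rowcount != delcnt:
        con.rollback()  # mismatch: restore the deleted rows
        return 0
    con.commit()
    return cur.rowcount

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO customers VALUES (?)", [(1,), (2,), (3,)])
con.commit()

r1 = safe_delete(con, "customers", "id = 3", delcnt=1)  # counts match
r2 = safe_delete(con, "customers", "id > 0", delcnt=1)  # 2 rows affected: rolled back
print(r1, r2)
```

The second call would have emptied the table, but because two rows were affected instead of the expected one, the transaction is rolled back and both rows survive.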
Had we supplied a number, the procedure would have compared it to the <i>@@rowcount</i> server variable after the delete operation to determine if the number of records deleted matches the expected number. In our case, a message is displayed to tell us how many rows <i>would</i> have been deleted:</p><img alt="would_have_been_deleted_message (30K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/would_have_been_deleted_message.jpg" height="163" width="472" /><p>As expected, the <i>actcnt</i> is zero, confirming that no rows were actually deleted:</p><img alt="actcnt_variable (20K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/actcnt_variable.jpg" height="139" width="382" /><p>The <i>delcnt</i> parameter acts as a built-in fail-safe in that it forces the user to specify how many rows he/she expects to be deleted. We could also add one additional layer of safety by checking that the <i>whereclause</i> parameter was supplied:</p><img alt="whereclause_check (17K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/whereclause_check.jpg" height="115" width="515" /><h1 class="blog-sub-title">Maximizing <i>delcnt</i> Variable Checks</h1><p>Disallowing an empty where clause is not without drawbacks, in that there is nothing stopping someone from entering something like "id is not null". The best we can say about this safety check is that at least it would only fail in cases where the user deliberately deletes all rows in a table.</p><p>For that reason, a better solution may be to expand on the idea of comparing the actual count of affected (deleted) rows to the expected count (@delcnt). With only a few extra lines of code, we can count the number of rows in the table and roll back the transaction if the number of affected rows is equal to the total number of rows in the table.  This can be accomplished using the built-in <i>sp_executesql</i> stored procedure. 
It supports the use of both input and output parameters so we can store the results of the count(*) function to a variable. Here is the new code:</p><img alt="delete_from_table_procedure_with_actcnt_check (148K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/delete_from_table_procedure_with_actcnt_check.jpg" height="707" width="819" /><p>Now, if we try to run a query that would delete all of the rows in the table, such as the following:</p><img alt="input_param_dialog_with_destructive_whereclause_param (21K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/input_param_dialog_with_destructive_whereclause_param.jpg" height="216" width="418" /><p>...the deletion is prevented:</p><img alt="actcnt_variable_check_validation_message (22K)" src="https://www.navicat.com/link/Blog/Image/2020/20201006/actcnt_variable_check_validation_message.jpg" height="144" width="488" /><h1 class="blog-sub-title">Conclusion</h1><p>While there is no sure-fire way to prevent data loss due to accidental or deliberate deletion, every counter-measure that we employ helps minimize the chances of a catastrophic event taking place.</p><p>Interested in Navicat for SQL Server? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-sqlserver" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Obtaining Meta-data about Database Table Columns</title>
<link>https://www.navicat.com/company/aboutus/blog/1682-obtaining-meta-data-about-database-table-columns.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Obtaining Meta-data about Database Table Columns</title></head><body><b>Sep 22, 2020</b> by Robert Gravelle<br/><br/><p>Certain relational databases, including MySQL and SQL Server, have an INFORMATION_SCHEMA system database. It contains database metadata, such as the names of databases and tables, column data types, and even access privileges. It's also sometimes referred to as the data dictionary or system catalog. Regardless of how you refer to it, the INFORMATION_SCHEMA database is the ideal place to obtain details about table columns. In today's blog, we'll use the INFORMATION_SCHEMA database to find out whether or not a column exists and how many columns a particular table has.</p><h1 class="blog-sub-title">Viewing the INFORMATION_SCHEMA database in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a></h1><p>Being a system database, the INFORMATION_SCHEMA database won't be visible unless you explicitly tell Navicat to show it. 
To do that, add the INFORMATION_SCHEMA database to the Databases list defined within a database connection:</p><img alt="edit_connection_dialog (75K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/edit_connection_dialog.jpg" height="667" width="562" /><p>That allows us to open the Columns table in the Table Designer or Viewer:</p><img alt="INFORMATION_SCHEMA_columns_table (250K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/INFORMATION_SCHEMA_columns_table.jpg" height="796" width="948" /><p>The sheer number of columns should give you some idea as to what types of information we can obtain from the Columns table.</p><p><strong>Note: the INFORMATION_SCHEMA is a read-only database, so you can't make any changes to its structure or contents.</strong></p><h1 class="blog-sub-title">Introducing the Column Count Query</h1><p>The Columns table may be queried like any other to look up information about table columns.  Here's the basic syntax to do so:</p><pre>SELECT count(*) AS anyName FROM information_schema.columns WHERE [table_schema = 'yourSchemaName' AND] table_name = 'yourTableName'; </pre>   <p>The table_schema is the database in which the table resides. It's not crucial to the query, but in cases where you have more than one database containing a table with the same name, it filters results to that particular database's table. 
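The column-count lookup is easy to try from code as well. SQLite has no INFORMATION_SCHEMA, but its pragma_table_info() table-valued function plays the same role; here's a sketch via Python's sqlite3 with a hypothetical film table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE film (film_id INTEGER, title TEXT, length INTEGER)")

# SQLite analogue of:
#   SELECT count(*) FROM information_schema.columns
#   WHERE table_name = 'film';
col_count = con.execute(
    "SELECT count(*) FROM pragma_table_info('film')"
).fetchone()[0]
print(col_count)  # 3
```

The shape of the query is the same as the MySQL version: count rows in the engine's column catalog, filtered to one table.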
In a situation where you maintain multiple copies of the same database, the column count will be for all tables with the same name.</p><p>For instance, I have four copies of the Sakila database:</p><img alt="MySQl_connection_databases (24K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/MySQl_connection_databases.jpg" height="322" width="205" /><p>As a result, when I run the query without the table_schema, I get a column count of 51, which is on the high side!</p><img alt="select_column_count_of_film_table (34K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/select_column_count_of_film_table.jpg" height="262" width="394" /><p>Specifying the table_schema results in a more accurate column count of 12:</p><img alt="select_column_count_of_film_table_with_schema (40K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/select_column_count_of_film_table_with_schema.jpg" height="268" width="444" /><p>If we now open the film table in the Table Designer, we can confirm that 12 columns is correct:</p><img alt="film_table_design (87K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/film_table_design.jpg" height="520" width="518" /><h1 class="blog-sub-title">Determining Whether or Not a Column Exists</h1><p>In a dynamic application, you may want to look up information about a column, including whether or not it exists.  Here's a query that lists every instance of the "title" column, along with meta-data about each of them, including which schema and table it belongs to, as well as details such as the default value, data type, and maximum length:</p><img alt="finding_column_info (166K)" src="https://www.navicat.com/link/Blog/Image/2020/20200922/finding_column_info.jpg" height="572" width="1083" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to utilize the INFORMATION_SCHEMA database to find out whether or not a column exists and how many columns a particular table has.</p><p>Interested in Navicat Premium? 
You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Selecting the Second Highest Value from a Table</title>
<link>https://www.navicat.com/company/aboutus/blog/1681-selecting-the-second-highest-value-from-a-table.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Selecting the Second Highest Value from a Table</title></head><body><b>Sep 17, 2020</b> by Robert Gravelle<br/><br/><p>It's been said that second place is the first loser.  So, who needs an SQL statement to find out who these underachievers are?  Surprisingly, a lot of people.  In fact, the official term for this type of query is "nth highest value of a column". That's because techniques for selecting the 2nd highest value may also be applied for any value. In today's blog, we'll learn how to use ORDER BY ... DESC in conjunction with the LIMIT clause to obtain the 2nd highest value, and others, from a table. </p><h1 class="blog-sub-title">Introducing the Classic Models Database</h1><p>The <a class="default-links" href="https://www.mysqltutorial.org/mysql-sample-database.aspx/" target="_blank">classicmodels database</a> is a MySQL sample database to help learn SQL quickly and effectively. The classicmodels database represents a retailer of scale models of classic cars. It contains typical business data such as customers, products, sales orders, sales order line items, etc.</p><p>Here's a peek at the contents of the payments table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="payments_table (171K)" src="https://www.navicat.com/link/Blog/Image/2020/20200917/payments_table.jpg" height="646" width="659" /><p>We will compose a query that selects the 2nd highest payment from this table.</p><h1 class="blog-sub-title">About the LIMIT Statement</h1><p>The LIMIT clause may be added to a SELECT statement to constrain the number of rows returned. 
The LIMIT clause can accept either one or two arguments, each a non-negative integer.</p><p>Here's the syntax:</p><pre>SELECT select_list
FROM table_name
LIMIT [offset,] row_count;</pre><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">   <li> The offset specifies the offset of the first row to return. The offset of the first row is 0, not 1.</li>   <li> The row_count specifies the maximum number of rows to return.</li></ul><h1 class="blog-sub-title">Selecting the 2nd Highest Payment</h1><p>Knowing what we know about the LIMIT clause, we can now structure our SELECT statement as follows to fetch the 2nd highest value:</p><pre>SELECT * FROM yourTableName ORDER BY yourColumnName DESC LIMIT 1,1;</pre><p>Here is the equivalent statement to SELECT the 2nd highest amount from the payments table:</p><img alt="limit_query (48K)" src="https://www.navicat.com/link/Blog/Image/2020/20200917/limit_query.jpg" height="266" width="540" /><h3>Verifying the Results</h3><p>In Navicat, we can sort a table or view by any column by hovering the mouse pointer over the column header and then clicking the context menu arrow.  We can then choose the sort order from the list:</p><img alt="sort_menu (14K)" src="https://www.navicat.com/link/Blog/Image/2020/20200917/sort_menu.jpg" height="146" width="258" /><p>If we refer to the highlighted row in the image below, we can confirm that it is the correct one.</p><img alt="2nd_row_hightlighted (99K)" src="https://www.navicat.com/link/Blog/Image/2020/20200917/2nd_row_hightlighted.jpg" height="357" width="639" /><h1 class="blog-sub-title">Selecting the Nth Highest Payment</h1><p>We can use the same syntax to fetch other amounts.  
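The pattern is easy to verify end to end. Here's a sketch with hypothetical payment amounts, run through SQLite via Python's sqlite3 (SQLite accepts the same MySQL-style LIMIT offset, row_count form):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (amount REAL)")
con.executemany("INSERT INTO payments VALUES (?)",
                [(120.0,), (85.5,), (300.25,), (42.0,)])

# Sort descending, skip 1 row, return 1 row: the 2nd highest value
second = con.execute(
    "SELECT amount FROM payments ORDER BY amount DESC LIMIT 1, 1"
).fetchone()[0]
print(second)  # 120.0
```

Note that duplicate amounts count as separate rows under this approach; selecting from DISTINCT amounts first would rank unique values instead.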
For example, you could return the fourth highest value of a column by using the following syntax:</p><pre>SELECT * FROM yourTableName ORDER BY yourColumnName DESC LIMIT 3,1;</pre><p>In fact, we can use this syntax for any ranking:</p><pre>SELECT * FROM yourTableName ORDER BY yourColumnName DESC LIMIT desiredRank - 1, 1;</pre><p>Here's the query to fetch the 10th highest payment amount:</p><img alt="10th_amount_query (48K)" src="https://www.navicat.com/link/Blog/Image/2020/20200917/10th_amount_query.jpg" height="266" width="538" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to use ORDER BY ... DESC in conjunction with the LIMIT clause to obtain the Nth highest value from a table. In next week's blog, we'll accomplish the same task using the TOP statement.</p><p>Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Comparing the Semantics of Null, Zero, and Empty String in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1652-comparing-the-semantics-of-null,-zero,-and-empty-string-in-relational-databases.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Comparing the Semantics of Null, Zero, and Empty String in Relational Databases</title></head><body><b>Sep 8, 2020</b> by Robert Gravelle<br/><br/><p>All too often, database developers and administrators use Nulls, Zeroes, and Empty Strings interchangeably within their database tables. That's unfortunate, because Null, Zero, and an Empty String each represent something different in relational databases (RDBMS). As such, using these values incorrectly, or choosing the wrong one, can have enormous ramifications on the operation of your database and applications that rely on it. In today's blog, we'll explore how to best utilize the Null, Zero, and Empty String in database design and general usage.</p><h1 class="blog-sub-title">What is Null?</h1><p>The value Null has a long history in both relational databases and programming languages.  It was devised as a special value to represent the intentional absence of any value.  As such, it can be assigned to any <i>nullable</i> column. To designate a column as <i>nullable</i>, simply include the NULL keyword, or just leave it out, as columns are <i>nullable</i> by default: </p><pre>CREATE TABLE table_name (
    column1 datatype [ NULL | NOT NULL ],
    column2 datatype [ NULL | NOT NULL ],
    ...
);</pre><p>Navicat greatly simplifies the creation of tables via its Table Designer. In Navicat, to specify that a column may not contain NULL values, check the <i>Not null</i> checkbox:</p><img alt="Not_null_column (148K)" src="https://www.navicat.com/link/Blog/Image/2020/20200908/Not_null_column.jpg" height="566" width="866" /><p>In the Table Grid view, NULL values are represented as <i>(Null)</i>:</p><img alt="customers_table (202K)" src="https://www.navicat.com/link/Blog/Image/2020/20200908/customers_table.jpg" height="600" width="783" /><p>In terms of semantics, a missing value informs us that we do not know it.  Put more simply, a Null could mean "???". 
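One practical consequence of "unknown" is that NULL never matches an equality comparison, not even with another NULL; you must use IS NULL. Here's a minimal sketch using SQLite via Python's sqlite3 (a hypothetical contacts table with an illustrative column name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (cell_phone TEXT)")
con.executemany("INSERT INTO contacts VALUES (?)",
                [(None,), ("",), ("8005551212",)])

# '=' matches the empty string, but never NULL; IS NULL is required
eq_empty = con.execute("SELECT count(*) FROM contacts WHERE cell_phone = ''").fetchone()[0]
is_null  = con.execute("SELECT count(*) FROM contacts WHERE cell_phone IS NULL").fetchone()[0]
eq_null  = con.execute("SELECT count(*) FROM contacts WHERE cell_phone = NULL").fetchone()[0]
print(eq_empty, is_null, eq_null)  # 1 1 0
```

The last query returns zero rows because any comparison with NULL evaluates to unknown, which a WHERE clause treats as not matched.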
As we'll see shortly, this is not the case for either Zeroes or Empty Strings.</p><h1 class="blog-sub-title">The Zero Value</h1><p>A value of Zero (0) is a real number whose meaning is shared by other terms throughout the world, including nought (UK), naught (US), nil, zilch, zip, nada, scratch, and goose egg. Its value can be thought of as "Nothing". Consider a credit limit column. In that context, a value of 0.00 would indicate that the customer does not have credit. If the column is nullable, then a NULL value would mean that we don't know what the customer's credit limit is. In the first instance, we know what the customer's credit limit is, and that limit is zero.</p><p>In Navicat, we can set the default value of a column via the Default drop-down:</p><img alt="creditLimit_column (26K)" src="https://www.navicat.com/link/Blog/Image/2020/20200908/creditLimit_column.jpg" height="184" width="522" /><p>Note that NULL is the first option.</p><h1 class="blog-sub-title">The Empty String Demystified</h1><p>Much like Zero, an Empty String ("") differs from a NULL value in that the former specifically implies that the value was set to be empty, whereas NULL means that the value was not supplied or is unknown. As an example, let's consider a column that stores a cell phone number. An empty value would imply that the person does not have a cell phone, whereas a NULL would signify that he or she did not provide a number. 
These are two very different interpretations!</p><h1 class="blog-sub-title">Conclusion</h1><p>It is crucial for the database developer and administrator to understand the semantics of Nulls, Zeroes, and Empty Strings because using them incorrectly, or choosing the wrong value, can have enormous ramifications on the operation of the database and applications that interact with it.</p><p>To that end, <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat</a> Database Development and Administration clients facilitate working with Nulls, Zeroes, and Empty Strings by providing a Default drop-down and by clearly denoting Nulls in database tables.</p></body></html>]]></description>
</item>
<item>
<title>The Many Flavors of the SQL Count() Function</title>
<link>https://www.navicat.com/company/aboutus/blog/1650-the-many-flavors-of-the-sql-count-function.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The Many Flavors of the SQL Count() Function</title></head><body><b>Aug 27, 2020</b> by Robert Gravelle<br/><br/><p>If you have worked with relational databases (RDBMS) for any length of time, you have almost certainly utilized the SQL COUNT() function.  As such, you are no doubt already aware that the COUNT() function returns the number of rows in a table (or non-null values in a column), as filtered by the criteria specified in the WHERE clause.  Its flexible syntax and widespread support make it one of the most versatile and useful functions in SQL.  In today's blog, we'll take a look at its many permutations and learn how to obtain a variety of counts.</p><h1 class="blog-sub-title">One Function, Many Input Parameter Variations</h1><p>As an ANSI SQL function, COUNT() accepts parameters in the general SQL 2003 ANSI standard syntax. Having said that, different database vendors may have different ways of applying the COUNT() function. MySQL, PostgreSQL, and Microsoft SQL Server all follow the ANSI SQL syntax:</p><pre>COUNT(*)
COUNT( [ALL|DISTINCT] expression )</pre><p>Meanwhile, the DB2 and Oracle syntax differs slightly:</p><pre>COUNT ({*|[DISTINCT] expression}) OVER (window_clause)</pre><p>In this blog, we'll be focusing on the SQL 2003 ANSI standard syntax.  Here is what the input parameters mean:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>ALL: As its name implies, ALL applies the COUNT to all values so that it returns the number of non-null values.</li><li>DISTINCT: Ignores duplicate values so that COUNT returns the number of unique non-null values.</li><li>expression: An expression made up of:<ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"> <li>a single constant, variable, or scalar function</li> <li>a column name </li> <li>part of a SQL query that compares values against other values</li>  </ul> An expression may not include text or image types. 
Aggregate functions and subqueries are also not permitted.</li><li>*: COUNTs all the rows in the target table whether or not they include NULLs.</li></ul><h1 class="blog-sub-title">A Practical Example</h1><p>To sample some of the various syntax permutations and their effects on COUNT output, let's apply the COUNT() function to the following employees table, shown in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>:</p><img alt="employee_table (48K)" src="https://www.navicat.com/link/Blog/Image/2020/20200827/employee_table.jpg" height="281" width="451" /><p>Now, here's a query that counts several things:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>the total number of employees</li>    <li>the number of managers</li>    <li>the number of non-managers</li>    <li>the number of departments</li></ul><img alt="employee_count_query (69K)" src="https://www.navicat.com/link/Blog/Image/2020/20200827/employee_count_query.jpg" height="335" width="572" /><p>As the above query demonstrates, obtaining different counts is all about how you use the COUNT() function. In terms of when to use a specific column name as opposed to the asterisk, note that the former will not count nulls, whereas the latter will, because it counts entire rows. As to when to use DISTINCT, consider using it for columns that have repeated values, which tends to include columns defined as non-unique and/or not the Primary Key.</p><h1 class="blog-sub-title">Conclusion</h1><p>The SQL COUNT() function's flexible syntax and widespread support make it one of the most versatile and useful functions in SQL.  Speaking of syntax, if you ever find it difficult to remember the COUNT() function's syntax, you can let <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat</a> remind you!  
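The three flavors above are easy to demonstrate on a toy table: COUNT(*) counts rows, COUNT(column) skips nulls, and COUNT(DISTINCT column) counts unique non-null values. Here's a sketch with hypothetical data, run through SQLite via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, manager TEXT, dept TEXT)")
con.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("Ann", None,  "Sales"),   # Ann has no manager (NULL)
    ("Bob", "Ann", "Sales"),
    ("Cal", "Ann", "IT"),
])

total, with_manager, depts = con.execute(
    "SELECT count(*), count(manager), count(DISTINCT dept) FROM employees"
).fetchone()
print(total, with_manager, depts)  # 3 2 2
```

The NULL in Ann's manager column is counted by count(*) but skipped by count(manager), and DISTINCT collapses the two "Sales" values into one.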
The auto-complete suggestion list not only provides table and column names, but also stored procedures and functions, including COUNT(). You'll find not one, but two versions of it: one for simple usage and another for more complex uses: </p><img alt="count_function_in_suggestion_list (44K)" src="https://www.navicat.com/link/Blog/Image/2020/20200827/count_function_in_suggestion_list.jpg" height="252" width="494" /></body></html>]]></description>
</item>
<item>
<title>Storing Formatted Fields in a Database</title>
<link>https://www.navicat.com/company/aboutus/blog/1649-storing-formatted-fields-in-a-database.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Storing Formatted Fields in a Database</title></head><body><b>Aug 20, 2020</b> by Robert Gravelle<br/><br/><p>When it comes to storing formatted fields in a database, the adage "store raw, display pretty" usually holds true.  In most cases, raw values are the easiest to work with in the database, allowing them to be queried, sorted, compared, and what-have-you. Yet, there are times that you may want to leave in special characters, where they are essential to formatting, such as HTML markup.  In today's blog, we'll explore both options with examples using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><h1 class="blog-sub-title">Parsing Out Special Characters</h1><p>Consider a field that stores phone numbers. Just in North America, a phone number can be represented in many different formats, including "(800) 555-1212", "800-555-1212", "800 555-1212", or "8005551212". To store short pieces of variable data like phone numbers, it's usually best to strip out special (non-numeric) characters at the application layer before storing to the database. The application would also be responsible for presenting phone numbers in the predetermined display format. If you're concerned that all of this parsing and reformatting of data will place unnecessary strain on the server, rest assured that the processor overhead of formatting a phone number is trivial, taking far less than a microsecond in real-time.</p><h3>Data Type Considerations</h3><p>Some people think that numeric data like phone numbers lend themselves to a numeric data type such as int or bigint. That being said, most DBAs choose the char or varchar type over numeric ones, as non-numeric characters can be valid in phone numbers. 
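The application-layer round trip, stripping before storage and reformatting for display, might look like the following sketch (hypothetical helper names; a leading + is kept rather than stripped, since it can be significant in international numbers):

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip formatting characters before storage, keeping digits and
    any leading '+' (significant in international numbers)."""
    digits = re.sub(r"\D", "", raw)
    return "+" + digits if raw.lstrip().startswith("+") else digits

def display_phone(stored: str) -> str:
    """Format a bare 10-digit North American number for display."""
    if len(stored) == 10 and stored.isdigit():
        return f"({stored[:3]}) {stored[3:6]}-{stored[6:]}"
    return stored

print(normalize_phone("(800) 555-1212"))  # 8005551212
print(display_phone("8005551212"))        # (800) 555-1212
```

Because the stored form is uniform, queries and comparisons against the column stay simple, while the display format remains a presentation concern.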
A prime example is the + that replaces 00 at the start of international numbers.</p><p>For evidence of this practice, look no further than the Sakila Sample Database. There, you'll find a phone number column in the address table.  Here it is in the Table Designer of Navicat Premium:</p><img alt="address_table_design (145K)" src="https://www.navicat.com/link/Blog/Image/2020/20200820/address_table_design.jpg" height="577" width="820" /><p>Here, the phone field is given a length of 20 in order to accommodate a variety of phone numbers. A quick glance at the table contents shows the varying phone number lengths:</p><img alt="address_table (226K)" src="https://www.navicat.com/link/Blog/Image/2020/20200820/address_table.jpg" height="496" width="921" /><p>The good thing about varchar fields is that, if you ever needed to increase the column's capacity, you could do that easily enough using an ALTER TABLE statement, or simply by changing the Length property in Navicat.</p><h1 class="blog-sub-title">Preserving Formatting of Longer Fields</h1><p>For longer fields that contain formatted or <i>free form</i> user input, like descriptions, you may find it preferable to store them in a varchar or text column with all of the special characters included, because there would be no way to reformat them for display later.</p><h3>Viewing Free Form Content in Navicat</h3><p>Content that spans more than one line can be difficult to work with because the typical Grid view only shows one row per record:</p><img alt="film_table_in_grid_view (229K)" src="https://www.navicat.com/link/Blog/Image/2020/20200820/film_table_in_grid_view.jpg" height="439" width="892" /><p>Navicat offers a couple of ways to view longer fields:</p><h3>Form View</h3><p>The Form View allows you to view, update, insert, or delete data as a form, in which the current record is displayed by field name and its value. 
Form View also provides pop-up menus with the following additional functions:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"> <li>set the field value as Null/Empty String</li> <li>use current field value as a filter</li> <li>format form view</li></ul><img alt="film_table_in_form_view (94K)" src="https://www.navicat.com/link/Blog/Image/2020/20200820/film_table_in_form_view.jpg" height="553" width="756" /><h3>Text Editing</h3><p>Navicat provides a Text/Hex/Image/Web drop-down to view and edit TEXT/BLOB/BFile/HTML field content. To enable viewing/editing of a data type, select the type from the drop-down and toggle it to the ON position.  In the case of TEXT, you will see an editor appear at the bottom of the Table Grid: </p><img alt="text_editing (69K)" src="https://www.navicat.com/link/Blog/Image/2020/20200820/text_editing.jpg" height="226" width="740" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to store formatted data using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><p>Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Applying Select Distinct to One Column Only</title>
<link>https://www.navicat.com/company/aboutus/blog/1647-applying-select-distinct-to-one-column-only.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Applying Select Distinct to One Column Only</title></head><body><b>Aug 12, 2020</b> by Robert Gravelle<br/><br/><p>Adding the DISTINCT keyword to a SELECT query causes it to return only unique values for the specified column list so that duplicate rows are removed from the result set. Since DISTINCT operates on all of the fields in SELECT's column list, it can't be applied to an individual field that is part of a larger group.  That being said, there are ways to remove duplicate values from one column, while ignoring other columns.  We'll be taking a look at a couple of those here today.</p><h1 class="blog-sub-title">The Test Data</h1><p>In order to test out our queries, we'll need a table that contains duplicate data. For that purpose, I added some extra emails to the Sakila Sample Database's customer table. Here's a screenshot in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Grid view that shows a customer that has 2 associated emails:</p><img alt="costomer_table (64K)" src="https://www.navicat.com/link/Blog/Image/2020/20200812/costomer_table.jpg" height="199" width="648" /><p>If we were now to add the DISTINCT keyword to a query whose field list contains other columns, it would not work, because each row as a whole is unique:</p><img alt="select_distinct_result (100K)" src="https://www.navicat.com/link/Blog/Image/2020/20200812/select_distinct_result.jpg" height="485" width="555" /><p>So, what does work?  Let's find out!</p><h1 class="blog-sub-title">Using Group By</h1><p>The GROUP BY clause applies aggregate functions to a specific subset of data by grouping results according to one or more fields. 
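</p><p>As a sketch of where we are headed (column names assumed from Sakila's customer table), the GROUP BY approach looks like this:</p><pre>-- Keep one email per customer by grouping on customer_id;
-- MIN() simply picks a deterministic email for each group.
SELECT c.customer_id, c.first_name, c.last_name, c.email
FROM customer c
JOIN (
    SELECT customer_id, MIN(email) AS email
    FROM customer
    GROUP BY customer_id
) u ON u.customer_id = c.customer_id
   AND u.email = c.email;</pre><p>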
When combined with a function such as MIN or MAX, GROUP BY can limit a field to the first or last instance, relative to another field.</p><p>Therefore, if we wanted to limit emails to one per customer, we could include a sub-query that groups emails by customer_id. We can then select other columns by joining the email column to the unique ones returned by the sub-query:</p><img alt="group_by (137K)" src="https://www.navicat.com/link/Blog/Image/2020/20200812/group_by.jpg" height="497" width="840" /><h1 class="blog-sub-title">Using a Window Function</h1><p>Another, albeit slightly more <i>advanced</i>, solution is to use a window function.  Window functions are thus named because they perform a calculation across a set of table rows that are related to the current row. Unlike regular aggregate functions, window functions do not cause rows to become grouped into a single output row; instead, the rows retain their separate identities.</p><p>ROW_NUMBER() is a window function that assigns a sequential integer to each row within the partition of a result set, starting with 1 for the first row in each partition. In the case of customers' emails, the 1st email returns 1, the 2nd returns 2, etc. We can then use that value (referenced as "rn" below) to select only the 1st email for each customer.</p><img alt="windows_function (154K)" src="https://www.navicat.com/link/Blog/Image/2020/20200812/windows_function.jpg" height="626" width="678" /><p>It should be noted that not all relational databases support window functions. SQL Server does support them, while MySQL introduced them in version 8.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to remove duplicates from individual fields that are part of a larger group using GROUP BY and window functions.  There are undoubtedly many other ways of achieving the same end goal, but these two tried-and-true techniques should serve you well.</p><p>Interested in Navicat Premium? 
You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Splitting Query Results into Ranges</title>
<link>https://www.navicat.com/company/aboutus/blog/1646-splitting-query-results-into-ranges.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Splitting Query Results into Ranges</title></head><body><b>Aug 4, 2020</b> by Robert Gravelle<br/><br/><p>Grouping query results into buckets of equal size is a common requirement for database developers and database administrators (DBAs) alike.  Examples include: </p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>customers whose last names begin with A - L and M - Z</li><li>product prices that are between 1 - 10 dollars, 11 - 20 dollars, 21 - 30 dollars, etc.</li><li>quarterly sales, i.e., from Jan - Mar, Apr - Jun, Jul - Sep, Oct - Dec</li></ul><p>Standard SQL is well suited to this task.  By combining the power of the CASE statement with the GROUP BY clause, data can be broken up into whatever ranges we deem necessary to best interpret our data.  In today's blog, we'll compose a couple of range queries in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s excellent Query Editor. </p><h1 class="blog-sub-title">Splitting Grades into Percentiles</h1><p>Our first example will require a table containing the grades of several students.  
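</p><p>The pattern we will apply throughout combines a CASE expression with GROUP BY; as a sketch (using a Marks column like the one in the table defined next):</p><pre>-- Count rows per bucket; each BETWEEN range is inclusive.
-- MySQL permits grouping by the SELECT alias; in other databases,
-- repeat the CASE expression in the GROUP BY clause.
SELECT CASE
         WHEN Marks BETWEEN 0 AND 25 THEN '0 - 25'
         WHEN Marks BETWEEN 26 AND 50 THEN '26 - 50'
         WHEN Marks BETWEEN 51 AND 75 THEN '51 - 75'
         ELSE '76 - 100'
       END AS bucket,
       COUNT(*) AS students
FROM grade
GROUP BY bucket;</pre><p>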
Here's the SQL to create and populate the <i>grade</i> table:</p><pre>DROP TABLE IF EXISTS `grade`;

CREATE TABLE `grade` (
  `StuID` int(11) NULL DEFAULT NULL,
  `Semester` tinyint(4) NULL DEFAULT NULL,
  `YEAR` int(11) NULL DEFAULT NULL,
  `Marks` int(11) NULL DEFAULT NULL
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;

INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (110, 1, 2018, 66);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (110, 3, 2018, 77);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (110, 2, 2018, 86);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (110, 4, 2018, 69);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (100, 1, 2018, 20);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (100, 2, 2018, 39);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (100, 3, 2018, 65);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (100, 4, 2018, 70);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (99, 1, 2018, 50);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (99, 2, 2018, 45);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (99, 3, 2018, 90);
INSERT INTO `grade`(`StuID`, `Semester`, `YEAR`, `Marks`) VALUES (99, 4, 2018, 96);</pre><p>Here is the <i>grade</i> table in Navicat:</p><img alt="grade_table (89K)" src="https://www.navicat.com/link/Blog/Image/2020/20200804/grade_table.jpg" height="515" width="531" /><p>Let's say that we wanted to count students' grades by quartile, i.e., four equal percentile ranges, as follows:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>0 - 25</li><li>26 - 50</li><li>51 - 75</li><li>76 - 100</li></ul><p>Here's the query to do that, along with the results generated:</p><img alt="student_marks (60K)" src="https://www.navicat.com/link/Blog/Image/2020/20200804/student_marks.jpg" height="394" width="543" 
/><p>Pay attention to the CASE statement and you'll notice that it defines each range using the BETWEEN operator. BETWEEN is inclusive, meaning that the boundary values are included in the range. It works with many types of data, including numbers, text, and dates.</p><h1 class="blog-sub-title">Working with Dates</h1><p>In many cases, dates can be split into logical segments using one of the DATE type's many date part functions, such as DAY(), DAYOFMONTH(), DAYOFWEEK(), DAYOFYEAR(), MONTH(), YEAR(), etc. These allow you to break up your ranges by intuitive units.</p><p>To demonstrate, here's a query in MySQL using the Sakila Sample Database that calculates the average rental cost for each customer, broken down by year and month:</p><img alt="average_rental_cost (163K)" src="https://www.navicat.com/link/Blog/Image/2020/20200804/average_rental_cost.jpg" height="725" width="602" /><p>The advantage of using DATE functions is that they allow us to dispense with the CASE statement because we can GROUP BY the same functions.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to write range queries using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s excellent Query Editor.  Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Using Output Parameters in Stored Procedures</title>
<link>https://www.navicat.com/company/aboutus/blog/1645-using-output-parameters-in-stored-procedures.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using Output Parameters in Stored Procedures</title></head><body><b>Jul 29, 2020</b> by Robert Gravelle<br/><br/><p>Output parameters are a feature of stored procedures that is seldom used, which is a shame because they are an excellent option for returning scalar data to the user. In today's blog, we'll learn some uses for Output Parameters and how to use them in your stored procedures. </p><h1 class="blog-sub-title">Syntax</h1><p>The exact syntax for declaring parameters differs somewhat from one database vendor to another, so let's look at a couple of different examples. Here's one in SQL Server that simply parrots back the input parameter to the user:</p><img alt="ParrotProcedure_SQL_Server (55K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/ParrotProcedure_SQL_Server.jpg" height="317" width="556" /><p>In MySQL, there are slight differences in syntax, such as the IN/OUT keywords being located before the parameter names:</p><img alt="ParrotProcedure_MySQL (32K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/ParrotProcedure_MySQL.jpg" height="196" width="501" /><p>Some relational database management systems (RDBMSs), such as MySQL, support INOUT parameters. These are a combination of IN and OUT parameters, in that the calling program first passes in the INOUT parameter, and then the stored procedure modifies it before sending the updated value back to the calling program. Other RDBMSs, such as SQL Server, treat OUT parameters like INOUT parameters by allowing them to be passed in to the procedure.</p><h1 class="blog-sub-title">A Slightly More Complex Example (in MySQL)</h1><p>The Sakila Sample Database was originally created as a learning tool for MySQL, but has since been ported to other DBMS as well. It's themed around a fictional video rental store, and contains a number of user functions and stored procedures.  Some of these, such as the film_in_stock procedure, include both IN and OUT parameters. 
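</p><p>From plain SQL, a MySQL procedure with an OUT parameter is invoked by passing a user variable to receive the value; using film_in_stock's parameters, the call would look something like this (the film and store IDs shown are just sample values):</p><pre>-- IN: film ID and store ID; OUT: count captured in a user variable
CALL film_in_stock(1, 1, @film_count);
SELECT @film_count;</pre><p>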
Here is its definition in Navicat Premium:</p><img alt="film_in_stock_mysql (123K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/film_in_stock_mysql.png" height="318" width="602" /><p>The film_in_stock stored procedure determines whether any copies of a given film are in stock at a given store. As such, it declares two input parameters - the film ID and store ID - as well as an output parameter that relays the count of films in stock. A user function could have been employed for this purpose, but a procedure can also list the IDs of every film in stock. That's why there are two SELECT statements in the procedure body (between the BEGIN and END delimiters). The first SELECT fetches the film IDs, while the second populates the output parameter with the number of found rows.</p><h3>Running the film_in_stock Stored Procedure</h3><p>In Navicat, we can run a procedure directly from the designer via the Execute button. Clicking it brings up a dialog for entering input parameters:</p><img alt="input_params_dialog (2K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/input_params_dialog.png" height="160" width="418" /><p>A stored procedure may return multiple result sets and/or output parameters, so to deal with this, Navicat shows each in its own Result tab. 
The first tab shows the result set produced by the first query in the procedure, i.e., the inventory IDs of films that are in stock:</p><img alt="film_in_stock_result_set (11K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/film_in_stock_result_set.png" height="275" width="366" /><p>The second tab shows the count of films in stock at the store identified by the p_store_id input parameter (basically the number of rows returned by the first query):</p><img alt="film_in_stock_output_param (5K)" src="https://www.navicat.com/link/Blog/Image/2020/20200729/film_in_stock_output_param.png" height="129" width="349" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we saw how the flexibility provided by the combination of input/output parameters and result sets makes stored procedures a truly powerful tool in the database developer's arsenal.</p></body></html>]]></description>
</item>
<item>
<title>Hiding Databases From Users in MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1644-hiding-databases-from-users-in-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Hiding Databases From Users in MySQL</title></head><body><b>Jul 23, 2020</b> by Robert Gravelle<br/><br/><p>There's an adage regarding user privileges: assign a user the fewest privileges that he or she requires to perform their job function(s), and no more. That is why MySQL offers such a fine-grained access control system.  While not the easiest system to grasp, once a DBA does, he or she tends to agree that it really is quite effective. In today's blog, we'll learn how to prevent a user from listing databases in MySQL.</p>    <h1 class="blog-sub-title">Working with the mysql.user Table</h1><p>The mysql.user table contains information about users that have permission to access the MySQL server, along with their global privileges. Although it is possible to directly query and update the user table, it's best to use GRANT and CREATE USER for adding users and privileges. To see the structure of the user table, we can use the DESC command:</p><img alt="user_table (37K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/user_table.png" height="824" width="525" /><p>The privilege that allows a user to obtain a list of databases via the SHOW DATABASES command is Show_db_priv. Hence, we can prevent a user from seeing a new database simply by not granting the user any privileges on it. Otherwise, you can see which privileges a user currently has by issuing the SHOW GRANTS command:</p><pre>SHOW GRANTS FOR 'bob_s'@'localhost';</pre><p>Here's some example output for the "bob_s@localhost" user in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><img alt="show_grants_for_bob_s (74K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/show_grants_for_bob_s.png" height="255" width="602" /><h1 class="blog-sub-title">Revoking a User Privilege</h1><p>The above output confirms that bob_s does have the SHOW DATABASES privilege.  
If we now wanted to remove that privilege, we could issue the REVOKE command (SHOW DATABASES is a global privilege, so it is revoked at the server level):</p><pre>REVOKE SHOW DATABASES ON *.* FROM 'bob_s'@'localhost';</pre>    <p>In Navicat, we can set a user's privileges both at the server and database level on the Server Privileges and Privileges tabs of the user details. To access them, click the User button on the main button bar, select the user that you're interested in, and then click the Privilege Manager button on the Objects toolbar:</p><img alt="privilege_manager (32K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/privilege_manager.jpg" height="271" width="366" /><p>Here are bob_s@localhost's server-level privileges (including SHOW DATABASES):</p><img alt="server_level_privileges (14K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/server_level_privileges.png" height="671" width="358" /><p>To revoke the SHOW DATABASES privilege, we can simply uncheck the box beside the SHOW DATABASES label, and click the Save button.</p><p>Here are bob_s@localhost's privileges for the sakila database:</p><img alt="privileges_for_sakila_db (33K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/privileges_for_sakila_db.png" height="81" width="602" /><p>There, we can also set database-specific privileges such as those to create views, show views, drop tables, execute INSERT statements, etc.  We can even manage privileges at the table and column level!</p><img alt="add_privilege_dialog (59K)" src="https://www.navicat.com/link/Blog/Image/2020/20200723/add_privilege_dialog.jpg" height="394" width="586" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we saw how to prevent a user from listing databases in MySQL, both via the MySQL REVOKE command and using Navicat's Server Privileges and Privileges tabs.  
Which is the easier of the two is up for debate, but I personally find the checkbox approach more intuitive.</p><p>To learn more about managing users in Navicat, take a look at the Manage MySQL Users in Navicat Premium series:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/728-manage-mysql-users-in-navicat-premium-part-1-securing-the-root" target="_blank">Part 1: Securing the Root</a> </li><li><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/730-manage-mysql-users-in-navicat-premium-part-1-securing-the-root-2" target="_blank">Part 2: Creating a New User</a> </li><li><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/738-manage-mysql-users-in-navicat-premium-part-3-configuring-user-privileges" target="_blank">Part 3: Configuring User Privileges </a></li><li><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/745-manage-mysql-users-in-navicat-premium-part-4-the-privilege-manager-tool" target="_blank">Part 4: The Privilege Manager tool </a></li></ul><p>Interested in Navicat for MySQL? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Selecting Rows That Have One Value but Not Another</title>
<link>https://www.navicat.com/company/aboutus/blog/1602-selecting-rows-that-have-one-value-but-not-another.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Selecting Rows That Have One Value but Not Another</title></head><body><b>Jul 6, 2020</b> by Robert Gravelle<br/><br/><p>Fetching rows that have a particular value, but not others, is a fairly common task in database development and administration.  It sounds like a walk in the park, but limiting the results to those rows that possess one value to the exclusion of all others is trickier than it appears.  The reason is that, while it's trivial to filter out values using the != (not equals) or NOT IN comparison operators, these operators only hide values; they don't tell us whether or not an entity possesses the other values.  The good news is that there's an easy way to do it.  Read on to find out how!</p><h1 class="blog-sub-title">Selecting Users by Role</h1><p>One thing that all databases - and most applications - have is users. In particular, database users tend to have different roles. (Although application users may also have roles.) Here's an example of such a table in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>'s Table Designer:</p><img alt="user_roles_table_design (92K)" src="https://www.navicat.com/link/Blog/Image/2020/20200706/user_roles_table_design.jpg" width="800" /><p>In this case, the role_id would be a Foreign Key (FK) that links to a roles table that would store additional information about each role.  In the users table, the inclusion of the role_id leads to the possibility of having multiple rows for each user.  
Keep that in mind, because that idea will be revisited a little later on...</p><img alt="user_roles_table (37K)" src="https://www.navicat.com/link/Blog/Image/2020/20200706/user_roles_table.jpg" height="280" width="309" /><h1 class="blog-sub-title">The Wrong Way to List Users That Have One Role but No Others</h1><p>If we were now to list users who possess a particular user role, and only that role, we might be tempted to write something like the following:</p><img alt="role_id_equal_to_1 (33K)" src="https://www.navicat.com/link/Blog/Image/2020/20200706/role_id_equal_to_1.jpg" height="280" width="386" /><p>The problem is that the above query only lists users who have a role_id of 1.  It does nothing to address whether or not they have other role_ids as well.  Moreover, adding another criterion such as <code>AND role_id NOT IN (2,3,4,5,6,7,8,9)</code> does nothing to help because 1 is obviously not any other number!</p><p>So, how do we limit users to those that have only a role_id of 1 and no others?</p><h1 class="blog-sub-title">The RIGHT Way to List Users That Have One Role but No Others</h1><p>Well, this is technically not "The RIGHT Way" because there are surely others.  This solution consists of counting the number of rows for each user.  The idea is that, if a user has the role_id that we're interested in AND only has one row in the table, then they're someone we want to see in the results. </p><p>We can obtain a count of user_ids for each user in the table using a GROUP BY.  
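</p><p>Sketched as SQL (table and column names assumed from the screenshots above), the idea looks like this:</p><pre>-- Users whose only row carries the role we want:
-- COUNT(*) = 1 ensures a single row, and MAX(role_id) = 1
-- ensures that the single role is role_id 1.
SELECT user_id
FROM user_roles
GROUP BY user_id
HAVING COUNT(*) = 1
   AND MAX(role_id) = 1;</pre><p>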
Then, the HAVING clause can check that a user only has one row and that his/her one role_id is the one we want:</p><img alt="final_query (31K)" src="https://www.navicat.com/link/Blog/Image/2020/20200706/final_query.jpg" height="268" width="379" /><p>Now we only see the one user who has a role_id of 1, and no others!</p><h1 class="blog-sub-title">Conclusion</h1><p>One good thing about this approach is that it can easily be modified to find rows that contain multiple values or more than a certain number of rows in the table. </p><p>Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Using a Case Statement in a Where Clause</title>
<link>https://www.navicat.com/company/aboutus/blog/1546-using-a-case-statement-in-a-where-clause.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Using a Case Statement in a Where Clause</title></head><body><b>Jun 23, 2020</b> by Robert Gravelle<br/><br/><p>A short time ago we were introduced to the incredibly useful and versatile <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1249-using-the-sql-case-statement" target="_blank">Case Statement</a>. In that blog, we employed the Case Statement as most DBAs and developers do, in the SELECT clause. Another way to use the Case Statement is within the WHERE clause.  There, it may be utilized to alter the data fetched by a query based on a condition.  Within that context, the Case Statement is ideally suited to both static queries and dynamic ones, such as those that you would find inside a stored procedure.  In today's blog, we'll create a SELECT query in Navicat Premium that returns rows based on the values of another field.</p><h1 class="blog-sub-title">Listing Films by Rental Duration</h1><p>Before getting to the CASE Statement, let's start with a query that returns a list of movies from the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>. It's a MySQL database that contains a number of tables, views, and queries related to a fictional video rental store. Tables include actor, film, customer, rental, etc.</p><p>Our query displays the film_id, title, rental_rate, and rental_duration columns and narrows down the field to those films whose rental_duration is exactly 5 days. 
Here is the statement:</p><pre>SELECT film_id, title, rental_rate, rental_duration
FROM film
WHERE rental_duration = 5
ORDER BY rental_rate DESC;</pre><p>Executing the query in Navicat Premium brings up the following results:</p><img alt="films_by_duration (106K)" src="https://www.navicat.com/link/Blog/Image/2020/20200623/films_by_duration.jpg" height="640" width="427" /><h1 class="blog-sub-title">Setting Rental Duration Based On Rental Rate</h1><p>A Case Statement comes in handy for choosing a value based on several other possible values.  For instance, let's suppose that we wanted to set the rental_duration based on the rental_rate.  We could do that using a Case Statement like so:</p><pre>SELECT film_id, title, rental_rate, rental_duration
FROM film
WHERE rental_duration = CASE rental_rate
                          WHEN 0.99 THEN 3
                          WHEN 2.99 THEN 4
                          WHEN 4.99 THEN 5
                          ELSE 6
                        END
ORDER BY title DESC;</pre><p>That has the effect of associating rental_rates with rental_durations.  Hence:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>When the rental_rate is equal to 0.99, only include films whose rental_duration is equal to 3.</li><li>When the rental_rate is equal to 2.99, only include films whose rental_duration is equal to 4.</li><li>When the rental_rate is equal to 4.99, only include films whose rental_duration is equal to 5.</li><li>For any other rental_rate, only include films whose rental_duration is equal to 6.</li></ul><p>We can see the results in the screen capture below:</p><img alt="films_by_duration_using_case (145K)" src="https://www.navicat.com/link/Blog/Image/2020/20200623/films_by_duration_using_case.jpg" height="828" width="444" /><p>Notice how whenever a film has a rental rate of 0.99, the rental_duration is always 3. 
Likewise, films with a rental rate of 2.99 all have a rental_duration of 4, etc.</p><h1 class="blog-sub-title">Rewriting the CASE Statement</h1><p>Remember that the Case Statement is only an alternative way of combining two or more OR conditions.  As such, we can rewrite our query without the Case Statement, but, as you can see, it takes a lot more SQL:</p><pre>SELECT film_id, title, rental_rate, rental_duration
FROM film
WHERE rental_rate = 0.99 AND rental_duration = 3
   OR rental_rate = 2.99 AND rental_duration = 4
   OR rental_rate = 4.99 AND rental_duration = 5
   OR rental_rate NOT IN (0.99, 2.99, 4.99) AND rental_duration = 6
ORDER BY title DESC;</pre><p>Here are the results. Compare these with that of the CASE query:</p><img alt="films_by_duration_using_or (163K)" src="https://www.navicat.com/link/Blog/Image/2020/20200623/films_by_duration_using_or.jpg" height="833" width="507" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we created a SELECT query in Navicat Premium that returns a list of films using a CASE Statement in the WHERE clause. Interested in Navicat Premium? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Troubleshooting Slow Query Execution with Navicat Monitor 2</title>
<link>https://www.navicat.com/company/aboutus/blog/1545-troubleshooting-slow-query-execution-with-navicat-monitor-2.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Troubleshooting Slow Query Execution with Navicat Monitor 2</title></head><body><b>Jun 11, 2020</b> by Robert Gravelle<br/><br/><p>With so many factors to consider, uncovering the root cause(s) of slow query execution takes an organized approach. Luckily, with a bit of effort, you can pin down an issue to one of the more common culprits by checking up on a few things. In today's blog, we'll learn how Navicat Monitor 2 can help you get to the bottom of slow query execution - fast!</p><h1 class="blog-sub-title">Network Issues</h1><p>Database servers are designed to be accessed over a network, be it an internal or external one, like the World Wide Web. As such, the occasional dropping of a connection, or even outages that last for hours or days, are to be expected. Good performance in a local environment is a promising sign, but isn't necessarily enough to exclude network issues entirely, as the server itself could be overloaded. You can test for that using a monitoring tool that can track server OS metrics like CPU processes and memory. That's where <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> can help, by tracking O/S metrics. </p><p>On Windows-type servers, you can configure the CPU &amp; Memories section to monitor O/S metrics over Simple Network Management Protocol (SNMP):</p><img alt="edit_instance_dialog (74K)" src="https://www.navicat.com/link/Blog/Image/2020/20200611/edit_instance_dialog.jpg" height="801" width="784" /><p>Doing so will cause server metrics like CPU, Memory, and Disk Usage to appear in Dashboard Instance Cards:</p><img alt="dashboard_with_cpu_metrics (58K)" src="https://www.navicat.com/link/Blog/Image/2020/20200611/dashboard_with_cpu_metrics.jpg" height="613" width="466" /><p>You can also click on the system metrics to see more details, including Swap Usage, Connections, and Network Throughput.  
Each metric includes an interactive chart:</p><img alt="system_metrics (50K)" src="https://www.navicat.com/link/Blog/Image/2020/20200611/system_metrics.jpg" height="599" width="618" /><h1 class="blog-sub-title">Query Monitoring</h1><p>Once you've ruled out network issues, it's time to take a closer look at the query itself. A query can be functionally correct in that it fetches the correct data, but still be deficient by doing so in an inefficient manner. It's essential to design your queries in such a way as to maximize efficiency because, depending on the database engine, queries may effectively run sequentially, as a queue. Case in point: MySQL's MyISAM engine acquires a table-level lock when executing queries in order to protect data integrity during writes. During that time, other processes/queries must wait their turn while the first query completes. If it's a long-running query, that wait could wind up being a long one!</p><p>Navicat Monitor's Query Analyzer screen is very helpful in this regard. It shows the summary information of all executing queries and lets you spot problematic queries, including:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">   <li>top queries with cumulative execution time count</li>   <li>slow queries with unacceptable response time</li>   <li>deadlocks (when two or more queries permanently block each other)</li></ul><img alt="query_analyzer (125K)" src="https://www.navicat.com/link/Blog/Image/2020/20200611/query_analyzer.jpg" height="621" width="1023" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how Navicat Monitor 2 can help you get to the bottom of slow query execution - fast!</p><p>Navicat Monitor is a safe, simple and agentless remote server monitoring tool for MySQL, MariaDB and SQL Server. It includes a rich set of real-time and historical graphs that allow you to drill down into server statistic details. 
The latest release of Navicat Monitor (version 2.0) now supports SQL Server as well!</p><p>Navicat Monitor version 2.0 is now available for sale at the Navicat Online Store and is priced at US$499/token (commercial) and US$199/token (non-commercial). 1 token is needed to unlock 1 MySQL Server / 1 MariaDB Server / 1 SQL Server.</p><p>Click <a class="default-links" href="https://www.navicat.com/en/discover-navicat-monitor" target="_blank">here</a> for more details about all of Navicat Monitor's features, or, <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">download</a> the 14-day fully functional free trial!</p></body></html>]]></description>
</item>
<item>
<title>Counting String Occurrences in SQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1544-counting-string-occurrences-in-sql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Counting String Occurrences in SQL</title></head><body><b>June 5, 2020</b> by Robert Gravelle<br/><br/><p>Although not as proficient at string manipulation as procedural programming languages such as Java, C++, and PHP, SQL does provide many functions for working with string data. These may be employed to trim off extra spaces or characters, determine how long a string is, and concatenate several field values together. String functions are well worth becoming acquainted with as they can help make your code more effective and readable. In today's blog, we'll learn how to count the number of string occurrences within a char, varchar or text field using a couple of native SQL string functions.</p><h1 class="blog-sub-title">Introducing the LENGTH() and REPLACE() Functions</h1><p>The two functions that we'll be using here today are LENGTH(str) and REPLACE(str, from_str, to_str). LENGTH() returns the length of a string in bytes; REPLACE() returns the string str with all occurrences of the string from_str replaced by the string to_str, by performing case-sensitive matching.</p><p>Because LENGTH() counts bytes rather than characters, it has some important ramifications: for a string containing five 2-byte characters, LENGTH() returns 10. To count characters, use CHAR_LENGTH() instead. 
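</p><p>To make the behavior concrete, here's a small runnable sketch. It uses Python's sqlite3 module as a stand-in for MySQL (an assumption made for portability; note that SQLite's LENGTH() counts characters, so a BLOB cast is used below to show the byte count that MySQL's LENGTH() would return):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Character count vs. byte count: 'é' occupies two bytes in UTF-8.
chars, bytes_ = conn.execute(
    "SELECT LENGTH('héllo'), LENGTH(CAST('héllo' AS BLOB))"
).fetchone()
print(chars, bytes_)  # 5 6

# REPLACE() swaps every occurrence of a sub-string:
url, = conn.execute(
    "SELECT REPLACE('http://example.com', 'http', 'https')"
).fetchone()
print(url)  # https://example.com

# Counting occurrences: remove the sub-string, then divide the
# length difference by the sub-string's length.
count, = conn.execute(
    "SELECT (LENGTH(:s) - LENGTH(REPLACE(:s, :sub, ''))) / LENGTH(:sub)",
    {"s": "the cat sat on the mat", "sub": "the"},
).fetchone()
print(count)  # 2
```

<p>The final query is the same length-difference formula this article builds on: strip the sub-string out with REPLACE() and divide the shrinkage by the sub-string's length.</p><p>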
</p><p>Here's an example:</p><img alt="length_function (33K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/length_function.jpg" height="266" width="543" /><p>Here's an example of the REPLACE() function that changes the protocol of a URL from "http" to "https":</p><img alt="replace_function (41K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/replace_function.jpg" height="264" width="539" /><h1 class="blog-sub-title">Let's Get Counting</h1><p>By combining LENGTH() and REPLACE() with the ROUND() function, we can obtain a count of a specific sub-string in a field that contains textual content.  Here's an example using the Sakila Sample database that returns the count of the word "Documentary" in the description field of the film table:</p><img alt="count_occurrences (172K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/count_occurrences.jpg" height="501" width="765" /><p>In essence, our query replaces occurrences of the target sub-string with an empty ("") string and compares the resulting string lengths. The difference between them, divided by the length of the sub-string, gives the number of occurrences of the sub-string in the source field.</p><h1 class="blog-sub-title">Incorporating Our Query Into a User Function</h1><p>If you plan on performing word counts on many different tables or using a variety of sub-string values, you should consider incorporating the main calculation into a custom User Function. Here's a function, named `count_string_instances`, that I created in Navicat:</p><img alt="count_occurrences_function (84K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/count_occurrences_function.jpg" height="402" width="791" /><h3 class="blog-sub-title">Testing the Function</h3><p>We can test our function in place by clicking the <i>Execute</i> button.  
That opens a dialog to accept input parameters:</p><img alt="input_param_dialog (21K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/input_param_dialog.jpg" height="160" width="418" /><p>The results confirm that the function is working correctly:</p><img alt="count_occurrences_function_test_result (18K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/count_occurrences_function_test_result.jpg" height="110" width="358" /><h3 class="blog-sub-title">Invoking Our Function from the Query</h3><p>With our function in place, we can replace the calculation portion of the query with a call to the count_string_instances() function.  As we begin to type the function name, the Navicat auto-suggest list now includes our function!</p><img alt="auto_complete (49K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/auto_complete.jpg" height="284" width="515" /><p>As with all functions, it is inserted into our query with input parameters ready to set.  We can navigate between them via the TAB key:</p><img alt="auto_complete_fields (43K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/auto_complete_fields.jpg" height="172" width="675" /><p>Here's the updated query with results:</p><img alt="query_with_function (181K)" src="https://www.navicat.com/link/Blog/Image/2020/20200605/query_with_function.jpg" height="538" width="687" /><h1 class="blog-sub-title">Conclusion</h1><p>There are many SQL string functions that can help make your code more effective and readable. These can be especially powerful when combined. In today's blog, we learned how to count the number of string occurrences within a char, varchar or text field by creating a custom user function using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat</a>'s versatile Function and Stored Procedure Editor.</p></body></html>]]></description>
</item>
<item>
<title>MySQL Default Values: Good or Bad? - Part 2: When To Use Them</title>
<link>https://www.navicat.com/company/aboutus/blog/1543-mysql-default-values-good-or-bad-part-2-when-to-use-them.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>MySQL Default Values: Good or Bad? - Part 2: When To Use Them</title></head><body><b>May 28, 2020</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 2: When To Use Them</h1><p>You probably already know that setting a default value on non-null columns helps get rid of those pesky "Field 'xyz' doesn't have a default value" errors. Hopefully you're also aware that keeping error messages at bay is not in itself a valid reason for supplying default values.  There are many reasons for providing default column values - some good, and some less so.  Part 1 explored the ramifications of MySQL's Strict SQL Mode, as well as how to view and set it using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 15</a>. In today's follow-up blog, we'll tackle when to use default values, and how to come up with good ones.</p><h1 class="blog-sub-title">Why Not Just Allow Nulls?</h1><p>Nullable columns don't present the same challenges as non-null ones do, so why not allow nulls in all non-key columns?  In many instances, the point of applying the non-null constraint to a column is to force the application or system that populates it to supply a value.  Other times, a non-null column might contain audit information, such as the user ID or a timestamp.  In either case, you're looking for valid data, and not just filler.  </p><p>That's an important consideration because it drives home the importance of generating useful defaults as well as front-end validation.  I can still remember my first web application.  It collected user details such as names, emails, and phone numbers. All of these fields were required, so clever users found all sorts of ways to circumvent entering their real information, supplying phone numbers of 111-111-1111 and names such as "Elmer J. Fudd". 
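</p><p>The interplay between NOT NULL constraints and defaults is easy to demonstrate. Here's a minimal sketch using Python's sqlite3 module (a stand-in for MySQL, where the outcome of the failing INSERT also depends on Strict SQL Mode, as Part 1 covered):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,                    -- no default: caller must supply it
        level TEXT NOT NULL DEFAULT 'standard'  -- default fills in when omitted
    )
""")

# Omitting the defaulted column is fine; the default is applied.
conn.execute("INSERT INTO users (name) VALUES ('Alice')")
level, = conn.execute("SELECT level FROM users WHERE name = 'Alice'").fetchone()
print(level)  # standard

# Omitting a NOT NULL column that has no default is rejected outright.
try:
    conn.execute("INSERT INTO users (level) VALUES ('admin')")
    rejected = False
except sqlite3.IntegrityError as exc:
    rejected = True
    print(exc)  # e.g. NOT NULL constraint failed: users.name
```

<p>Note that the constraint only guarantees <i>a</i> value, not a <i>useful</i> one - which is exactly why well-chosen defaults and front-end validation matter.</p><p>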
</p><h1 class="blog-sub-title">Generating a Timestamp</h1><p>Now that we've gone over some reasons why automatically populating fields is worthwhile whenever you can do so, let's look at a common example of a generated value: an audit timestamp.</p><p>Several of the tables in the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a> feature a last_update column. These employ the timestamp data type; its value is set to the output of the MySQL CURRENT_TIMESTAMP function.  In Navicat (<a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Premium</a> pictured below), you can set the default value via a drop-down list:</p><img alt="last_update (135K)" src="https://www.navicat.com/link/Blog/Image/2020/20200528/last_update.jpg" height="628" width="866" /><p>The <i>Default</i> value sets the timestamp on record creation, whereas checking the <i>On Update Current_Timestamp</i> box tells MySQL to update the timestamp on every UPDATE operation.</p><h1 class="blog-sub-title">Sentinel Values</h1><p>In an RDBMS, a sentinel value is one that has a special meaning.  For instance, a value of 999 in an age column would signify that the age is unknown. I've also seen applications that employed "1900-01-01" for unknown dates. Sentinel values can be useful in cases where you want to assign a value of "unknown", whereas nulls mean "no value". Not everybody is fond of sentinel values because people and applications that work with the database have to be aware of all sentinel values in order to handle them properly.  </p><h1 class="blog-sub-title">Conclusion</h1><p>While default values - and, by extension, sentinel values - have their place in good database design and development, it's worth considering each column's purpose before assigning one.  Simply relying on default values to avoid working with nulls is probably not a good enough reason to do so.</p></body></html>]]></description>
</item>
<item>
<title>MySQL Default Values: Good or Bad? - Part 1: Strict SQL Mode</title>
<link>https://www.navicat.com/company/aboutus/blog/1536-mysql-default-values-good-or-bad-part-1-strict-sql-mode.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>MySQL Default Values: Good or Bad? - Part 1: Strict SQL Mode</title></head><body><b>May 22, 2020</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: Strict SQL Mode</h1><p>Getting errors when you don't supply a value for a non-null column can be an immense source of frustration.  There's a way to minimize the occurrence of such errors by setting a default value for those columns.  Seems like an easy fix, but, as in all things, the devil's in the details.  You have to be careful that you don't add a bunch of generic - and useless - data to your tables just for the sake of making INSERTs easier.  In today's blog, we'll learn about the ramifications of MySQL's Strict SQL Mode, as well as how to view and set it using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 15</a>.  In part 2 we'll cover when it makes sense to employ default values (and when it doesn't).</p><h1 class="blog-sub-title">Strict SQL Mode and Adjusted Values</h1><p>In MySQL, you can control how the server handles invalid or missing values in data-change statements such as INSERT or UPDATE by turning on Strict SQL Mode. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. If strict mode is not in effect, MySQL inserts <i>adjusted values</i> for both invalid and missing values and produces warnings.  Examples of adjusted values would be an empty string, zero, and a timestamp/date of 0000-00-00 00:00:00.</p><p>It should be fairly obvious that adjusted values could undermine the whole point of having defaults.  Hence, it's usually a good idea to activate strict SQL mode and provide a default value where appropriate. In Navicat, you can check your current value for SQL Mode on the Variables tab of the Server Monitor.  
You'll find it under Tools &gt; Server Monitor in the main menu.</p><img alt="sql_mode_variable (119K)" src="https://www.navicat.com/link/Blog/Image/2020/20200522/sql_mode_variable.jpg" style="max-width: 800px; height: auto;"/><p>Strict SQL mode is enabled if either STRICT_ALL_TABLES or STRICT_TRANS_TABLES is present.  When deciding which to use, note that the latter is more forgiving: it applies strict mode only to transactional storage engines. For non-transactional tables, MySQL may instead insert the appropriate adjusted value for the column data type and generate a warning rather than an error; processing of the statement then continues, with invalid values converted to the closest valid value.</p><h1 class="blog-sub-title">Strict SQL Mode in Action</h1><p>Let's compare Strict SQL Mode to the default SQL mode using the Sakila Sample Database.  The actor table does not allow nulls in any column, as evidenced by the checkboxes under the <i>Not null</i> header:</p><img alt="actor_table_definition (62K)" src="https://www.navicat.com/link/Blog/Image/2020/20200522/actor_table_definition.jpg" height="242" width="795" /><p>If we disable Strict SQL Mode for the current session using the SET command and perform an INSERT that only supplies the last_name, the database accepts it, but provides an empty string for the first_name: </p><img alt="insert_statement (43K)" src="https://www.navicat.com/link/Blog/Image/2020/20200522/insert_statement.jpg" height="281" width="510" /><p><img alt="actor_with_missing_first_name (31K)" src="https://www.navicat.com/link/Blog/Image/2020/20200522/actor_with_missing_first_name.jpg" height="279" width="307" /></p><p>If we re-activate Strict Mode, the same INSERT now fails with an error message:</p><img alt="actor_with_missing_first_name_strict_mode (61K)" src="https://www.navicat.com/link/Blog/Image/2020/20200522/actor_with_missing_first_name_strict_mode.jpg" height="280" width="690" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned about the ramifications of MySQL's 
Strict SQL Mode, as well as how to view and set it using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL 15</a>.  In part 2 we'll cover when it does - and doesn't - make sense to employ default values.</p></body></html>]]></description>
</item>
<item>
<title>Top N Queries by Group</title>
<link>https://www.navicat.com/company/aboutus/blog/1643-top-n-queries-by-group.html</link>
<description><![CDATA[<html><head><title>Top N Queries by Group</title></head><body><b>May 14, 2020</b> by Robert Gravelle<br/><br/><p>A Top N query is one that fetches the top records, ordered by some value, in descending order. Typically, these are accomplished using the TOP or LIMIT clause. Problem is, Top N result sets are limited to the highest values in the table, without any grouping. The GROUP BY clause can help with that, but it is limited to the single top result for each group.  If you want the top 5 per category, GROUP BY won't help by itself.  That doesn't mean it can't be done.  In fact, in today's blog, we'll learn exactly how to construct a Top N query by group.</p><h1 class="blog-sub-title">Top N Query Basics</h1><p>To gain a better understanding of the Top N query, let's compose one that selects the top 5 films with the longest running times from the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>. If you aren't familiar with the Sakila database, it's a MySQL database that contains a number of tables, views, and queries related to a fictional video rental store. Tables include actor, film, customer, rental, etc.</p><img alt="top_n (77K)" src="https://www.navicat.com/link/Blog/Image/2020/20200514/top_n.jpg" height="405" width="739" /><h1 class="blog-sub-title">Grouping Results by Category</h1><p>The GROUP BY clause applies an aggregate function to one or more fields so that the data relates to the groupings that you specify. It's a step forward in terms of grouping results, but GROUP BY still has a couple of limitations:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li>it only provides the very first result (i.e. row) per group and ignores others,</li>    <li>the columns are limited to those included in the grouping criteria and aggregated field(s). 
All other columns are not accessible.</li></ul><p>This query uses GROUP BY to show the longest running film for each rating:</p><img alt="group_by (47K)" src="https://www.navicat.com/link/Blog/Image/2020/20200514/group_by.jpg" height="412" width="554" /><p>Notice that we can't include the film title, because it is not part of either the GROUP BY or aggregated field(s).</p><h1 class="blog-sub-title">Crash Course in Window Functions</h1><p>The term "window" in window functions refers to the set of rows on which the function operates; the function uses values from the rows in the window to calculate its returned values.  Unlike GROUP BY, though, the rows in the window are not collapsed into a single output row.  </p><p>To use a window function in a query, you have to define the window using the OVER() clause. It does 2 things:</p><ol>  <li>Defines window partitions to form groups of rows, via the PARTITION BY clause.</li>  <li>Orders rows within a partition, via the ORDER BY clause.</li></ol>    <p>A query can include multiple window functions with the same or different window definitions.</p><p>Our query uses the ROW_NUMBER() window function. It assigns a sequential integer number to each row in the query's inner window result set. We can use that value to limit the results for each rating to the top 5.  That's done by ordering each partition on length in descending order.</p><img alt="row_number (116K)" src="https://www.navicat.com/link/Blog/Image/2020/20200514/row_number.jpg" height="736" width="555" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to construct a query that fetches the top 5 rows per category in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.  Version 15 adds over 100 enhancements and includes several new features to give you more ways than ever to build, manage, and maintain your databases!</p></body></html>]]></description>
</item>
<item>
<title>Is the Database or Application the Best Place for a Custom Function?</title>
<link>https://www.navicat.com/company/aboutus/blog/1393-is-the-database-or-application-the-best-place-for-a-custom-function.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Is the Database or Application the Best Place for a Custom Function?</title></head><body><b>May 6, 2020</b> by Robert Gravelle<br/><br/><p>Deciding whether to create a function in the database or in application code can be daunting. All too often, you don't realize that you've made the wrong choice until it's a big hassle to make an about-face. Worse still is the fact that many developers base their decision on whether they're more familiar with SQL or application coding!  A better approach is to rely on the strengths of a technology to help guide your decision.  In today's blog, we'll break down the decision-making process when choosing between a user-defined function (UDF) and one that resides on the application side. </p><h1 class="blog-sub-title">Database Power!</h1><p>There are things that databases can do well, and things that they struggle with. Like stored procedures, functions are written in SQL.  As such, they will excel at tasks where SQL shines. 
Here's a list of such tasks, along with why they are best done in SQL as opposed to application code:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li>joins: in application code, this could require complex array manipulation</li>    <li>filtering data (i.e., where clause): in code, this could require heavy inserting and deleting of items in lists</li>    <li>selecting columns: again, in application code, this could require heavy list or array manipulation</li>    <li>aggregate functions: in application code, this could require arrays to hold values and complex switch cases</li>    <li>foreign key integrity: in application code, this could require queries prior to insert and assumes that no one will access the data outside the app</li>    <li>primary key integrity: in application code, this could also require queries prior to insert and assumes that no one will access the data outside the app</li></ul><p>Attempting to do any of the above rather than relying on SQL inevitably leads to writing a lot of code and reduced efficiency, which translates to more code to debug and maintain as well as poor application performance.</p><p>On the flip side, DBMSs do not excel at complex procedural processing; that's the domain of application code. It's a big reason why the debugging facilities of an Integrated Development Environment (IDE) such as VS Code or Eclipse are far superior to anything you will find in a database development environment.</p><h1 class="blog-sub-title">A Case Study</h1><p>The <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a> was developed as a learning tool and has been widely shared throughout the database community. It's a MySQL database that contains a number of tables, views, stored procedures, and functions pertaining to a fictional video rental store. One of those functions is called inventory_in_stock. 
It's a UDF that accepts an inventory_id and returns a boolean value indicating whether or not that film is in stock.</p><p>Here is the inventory_in_stock function definition in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Function Designer:</p><img alt="inventory_in_stock_function (120K)" src="https://www.navicat.com/link/Blog/Image/2020/20200506/inventory_in_stock_function.jpg" height="519" width="894" /><p>Let's quickly run it to see how it works.</p><p>Clicking the Execute button brings up a dialog to accept the input parameters:</p><img alt="inventory_in_stock_param_dialog (14K)" src="https://www.navicat.com/link/Blog/Image/2020/20200506/inventory_in_stock_param_dialog.jpg" height="132" width="418" /><p>Here are the results:</p><img alt="inventory_in_stock_results (18K)" src="https://www.navicat.com/link/Blog/Image/2020/20200506/inventory_in_stock_results.jpg" height="127" width="346" /><p>A value of 1 indicates that the film is in stock.</p><p>Now, consider what would happen if we were to replace that function with one that resides in the application. It would need to make two calls to the database to execute SQL statements. That would lead to additional network traffic and require us to maintain SQL within the application. This is a bad practice in general because it mixes database and application code together.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned that you should place your custom function code where it can best benefit from the technology's strengths: within the application where complex procedural processing is required, and in the database where SQL is required.</p><p>Interested in finding out more about <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Managing Databases Remotely Using Navicat - Part III</title>
<link>https://www.navicat.com/company/aboutus/blog/1598-managing-databases-remotely-using-navicat-part-iii.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Managing Databases Remotely Using Navicat - Part III</title></head><body><b>Apr 29, 2020</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part III: Navicat Cloud FAQ</h1><p>While Navicat Cloud has been around for a few years now, it has really come into its own recently, as the Covid-19 pandemic has forced organizations to implement a work-from-home protocol. We learned the basics of Navicat Cloud in the last blog, Navicat Cloud and Team Collaboration. If you're following along, that was part 2 in the series.  In this last installment, we'll pick up where we left off last week and look at how Navicat Cloud can help your team be more productive while working remotely by answering your biggest questions. </p>   <h1 class="blog-sub-title">How safe is Navicat Cloud?</h1><p>Before I am ready to share anything work-related over an open network, I want to be sure that it's secure. In particular, I wouldn't want someone to learn how to connect to the database using connection data.  Although Navicat Cloud does allow you to store your connection settings, these are not sufficient for connecting to the database. That's because your database passwords and data are never stored in Navicat Cloud; only connection settings, queries, model files, and virtual groups are ever stored. </p><h1 class="blog-sub-title">Where does Navicat Cloud store connection settings, queries, model files, and virtual groups?</h1><p>Behind the scenes, Navicat Cloud uses Amazon Simple Storage Service (Amazon S3) for all online storage. Files are stored using 256-bit AES encryption and transferred between Navicat applications and the Navicat Cloud service over a secure SSL tunnel.</p><h1 class="blog-sub-title">What happens if I lose my connection settings, queries, model files, and/or virtual group information?</h1><p>Navicat Cloud includes Two-step verification.  
That's an additional security feature that provides an advanced authentication solution for your Navicat Cloud account. Should you experience hardware and/or software failures on your computer and mobile devices, rest assured that once your files are synced to the cloud, all your files are kept safe and sound.</p><h1 class="blog-sub-title">How do I view my usage?</h1><p>In Navicat Cloud, each connection, query, model, and virtual group counts as one unit. Your account comes with 150 units of free storage. Once you reach the storage limit, Navicat Cloud stops syncing and displays a warning message. Fear not, you won't lose any information and your files will be synced again automatically once storage space becomes available.  How does that happen?  You can either clean up stored objects to make room or purchase additional storage units. </p><p>You can see how many units you've used up on your Navicat Cloud account from the account profile in Navicat.</p><p>On the desktop version:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">   <li> Sign in to your Navicat Cloud account.</li>   <li> Click on your name from the top right corner to open your account profile window.</li>   <li> Click on View Details.</li></ul><p>On the iOS version:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">   <li> Sign in to your Navicat Cloud account.</li>   <li> Click on your avatar from the top left corner to open your account profile window.</li>   <li> Click on USAGE.</li></ul><p>Here's the usage dialog in Windows 10:</p><img alt="usage (40K)" src="https://www.navicat.com/link/Blog/Image/2020/20200429/usage.jpg" height="467" width="482" /><h1 class="blog-sub-title">Conclusion</h1><p>In this three-part series on working with databases remotely using Navicat, we learned how we can continue to be productive within a collaborative environment while working from home.  
</p><p>You can find out more about Navicat Premium 15 <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">here</a>. To learn more about Navicat Cloud, visit the <a class="default-links" href="https://www.navicat.com/en/navicat-cloud" target="_blank">product page</a>.</p></body></html>]]></description>
</item>
<item>
<title>Managing Databases Remotely Using Navicat - Part II</title>
<link>https://www.navicat.com/company/aboutus/blog/1372-managing-databases-remotely-using-navicat-part-ii.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Managing Databases Remotely Using Navicat - Part II</title></head><body><b>Apr 22, 2020</b> by Robert Gravelle<br/><h1 class="blog-sub-title">Part II: Navicat Cloud and Team Collaboration </h1>  <p>As the Covid-19 pandemic wears on, organizations that can support working from home have continued to remain productive while maintaining physical distancing.  In the <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1314-managing-databases-remotely-using-navicat.html" target="_blank">last blog</a>, we learned how to access sensitive work data by establishing a secure connection to a remote database via <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a>. Today's follow-up will introduce Navicat Cloud, an add-on feature inside Navicat Development and Administration products for collaborating with team members from across town to around the globe.</p><h1 class="blog-sub-title">What is Navicat Cloud?</h1>  <p><a class="default-links" href="https://navicat.com/en/navicat-cloud" target="_blank">Navicat Cloud</a> is a cloud-based service that allows you to synchronize your connection settings, queries, models, and virtual group information across multiple devices. Storing files in Navicat Cloud will cause them to automatically show up in both desktop and mobile versions of Navicat, providing your team with real-time access to files anytime and from anywhere.</p><h1 class="blog-sub-title">Connecting to Navicat Cloud</h1><p>Navicat Cloud was integrated into Navicat products in version 11.1.  Its access point is located under File &gt; Navicat Cloud on the main menu:</p><img alt="file_menu (29K)" src="https://www.navicat.com/link/Blog/Image/2020/20200422/file_menu.jpg" height="391" width="244" /><p>In order to use the service, you'll need to create a Navicat ID and password using the Create Navicat ID link on the Sign In screen.  
The ID does double duty as both your registered email and ID.</p><img alt="sign_in (25K)" src="https://www.navicat.com/link/Blog/Image/2020/20200422/sign_in.jpg" height="425" width="482" /><p>Once submitted, a confirmation email will be sent to the email address that you provided.  In the email, click the Activate Now link to log in to Navicat Cloud via the Sign In screen.</p><h1 class="blog-sub-title">Added Security using Member Roles</h1><p>Navicat Cloud provides added security by allowing you to assign a role to coworkers for each project they work on. It grants them access to projects based on the role they play. Each role determines whether they can create, view, and modify project files. You can securely share your projects with members; you can also control who can see and edit the project.</p><p>Navicat Cloud defines 4 types of member roles:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Owner: The owner is a project leader who creates the project.  The owner has full privileges on the project and is the only member who can delete the project.</li><li>Admin: The admin is a lead member who handles the administration aspects of the project. Admins have full read/write access to the project to which they are assigned, including the ability to add/remove a project member and change member roles.</li><li>Member: The member is a project member who can read and write all project files. Navicat recommends that you use this role as the default for all members and assign other roles only as needed.</li><li>Guest: The guest is a basic member with read-only access to project files. 
This role is useful for members who need to view, but not edit, the project.</li></ul><h1 class="blog-sub-title">Navicat Cloud Availability and Pricing</h1><p>Navicat Cloud is available on all Navicat products and all platforms including Windows, macOS, Linux and iOS (iPhone, iPad and iPod).</p><p>It is offered via 2 subscription types:</p><p>The free Basic Plan includes:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>3 Projects</li><li>150 Units</li><li>3 members per project</li></ul><p>For larger projects, there's the Pro Plan.  It costs $9.99 per month or $99 per year and includes:</p> <ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>500 Projects</li><li>5000 Units</li><li>500 members per project</li></ul><h1 class="blog-sub-title">Going Forward</h1><p>Today's blog introduced Navicat Cloud, an add-on feature inside Navicat Development and Administration products for collaborating with team members from across town to around the globe.</p><p>In part 3, we'll learn how easy Navicat Cloud makes it to share files with team members to increase productivity.</p></body></html>]]></description>
</item>
<item>
<title>Managing Databases Remotely Using Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/1314-managing-databases-remotely-using-navicat.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Managing Databases Remotely Using Navicat </title></head><body><b>Apr 7, 2020</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: Connecting to a Remote Database Instance</h1><p>Remote work has been on the rise for some time now. Today, for those organizations still operating during the Covid-19 pandemic, it has become a necessity. Luckily, popular database systems (DBMSes) have long supported remote connections.  Likewise, Navicat's database development and administration products are also well equipped to access databases remotely. In today's blog we'll learn how to establish a secure connection to a remote database instance using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a>.</p><h1 class="blog-sub-title">Local vs. Remote Databases</h1><p>While it is possible for database and client software to be installed on the same computer, in practice, that tends to only be the case for local development purposes. In an organizational setting, the database usually resides on a server that may be part of the organizational infrastructure or in the Cloud. In either case, the mechanisms for connecting to the database are much the same.</p> <h1 class="blog-sub-title">TCP/IP</h1><p>TCP/IP is short for Transmission Control Protocol/Internet Protocol. It's really a suite of communication protocols used to interconnect network devices over the Internet. However, TCP/IP can also be employed as a communications protocol in a private LAN or WAN network. It's the easiest way to connect to a remote database, but offers the least security because unless the database and client(s) who interact with it both reside within an enclosed network, the data may be seen by anyone who cares to watch.</p><p>To establish a connection to the database, you must supply an endpoint. It can be the IP address of the database server or a domain name such as acme.com. 
In some cases a port number will also be required. Here's the connection to an SQL Server instance running on Amazon AWS from Navicat for SQL Server:</p><img alt="general_tab (61K)" src="https://www.navicat.com/link/Blog/Image/2020/20200402/general_tab.jpg" height="667" width="562" /><p>In the case of TCP/IP connections, it is imperative that you use a strong user password.</p><h1 class="blog-sub-title">SSH Tunneling</h1><p>If you require a more secure connection, you can use SSH Tunneling. SSH stands for "Secure Shell". It's called a tunnel because it allows you to "tunnel" a port between your local system and a remote server. Traffic is sent over the encrypted SSH connection, so it can't be monitored or modified in transit.</p><p>Here's the completed SSH screen in Navicat:</p><img alt="ssh_tab (44K)" src="https://www.navicat.com/link/Blog/Image/2020/20200402/ssh_tab.jpg" height="569" width="480" /><h1 class="blog-sub-title">Secure Sockets Layer (SSL)</h1><p>Another option for securing transmissions between the client and database is SSL. It's a protocol that was initially developed for transmitting private documents over the Internet. SSL works by binding the identities of entities such as websites and companies to cryptographic key pairs via digital documents known as X.509 certificates. Each key pair consists of a private key and a public key. The private key is kept secure, while the public key can be freely distributed via a certificate. 
Hence, before you can establish a secure connection, you must first install the <a class="default-links" href="https://www.openssl.org/" target="_blank">OpenSSL Library</a> and a certificate from a trusted authority.</p><p>To provide authentication details in Navicat, enable <i>Use authentication</i> and fill in the required information:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li><strong>Client Key File:</strong><br />The SSL key file in PEM format to use for establishing a secure connection.</li><li><strong>Client Certificate File:</strong><br />The SSL certificate file in PEM format to use for establishing a secure connection.</li><li><strong>CA Certificate File:</strong><br />The path to a file in PEM format that contains a list of trusted SSL certificate authorities.</li><li><strong>Specified Cipher:</strong><br />A list of permissible ciphers to use for SSL encryption.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to establish a secure connection to a remote database using Navicat.  In part 2, we'll learn how Navicat Cloud allows you to collaborate with team members from across town to around the globe.</p><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a> adds over 100 enhancements and includes several new features to give you more ways than ever to build, manage, and maintain your databases!</p></body></html>]]></description>
</item>
<item>
<title>Database Structure Synchronization using Navicat 15</title>
<link>https://www.navicat.com/company/aboutus/blog/1313-database-structure-synchronization-using-navicat-15.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Database Structure Synchronization using Navicat 15</title></head><body><b>Mar 11, 2020</b> by Robert Gravelle<br/><br/><p>Perform an Internet search for "database synchronization" and you're likely to receive a lot of information on synchronizing database data.  Meanwhile, instructions on synchronizing database schema structures are less prevalent. There is an inherent risk of destroying existing data that comes with altering the database structure. For that reason, you have to be extra careful when doing so.</p><p>Navicat can be a tremendous ally in synchronizing database structures. In today's blog, we'll learn how to use <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a>'s Structure Synchronization wizard to update one database's schema structure to match that of another.</p><h1 class="blog-sub-title">About the Structure Synchronization Wizard</h1><p>The Structure Synchronization Wizard can be launched from the Tools menu.  You'll find the <i>Data transfer...</i> and <i>Data Synchronization...</i> commands there as well:</p><img alt="tools_menu (42K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/tools_menu.jpg" height="255" width="371" /><p>Navicat introduced a new mechanism for structure synchronization back in version 12. It provides an easier and more intuitive way to visually compare and identify the differences between two databases. It also shows a side-by-side Data Definition Language (DDL) comparison that makes it easy to locate all the object differences. You can then choose and reorder your synchronization scripts to update the destination database.</p><p>It should be noted that <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1251-introducing-navicat-data-modeler-3-0" target="_blank">Navicat Data Modeler 3.0</a> also supports Structure Synchronization. 
It helps you to discover and capture changes made in the model and then apply them to a targeted schema. </p><h1 class="blog-sub-title">Minimizing the Risk of Data Loss</h1><p>Altering the structure of a database that already contains data is fraught with danger.  Therefore, you should always back up your data before attempting to synchronize database structures.  This can be easily accomplished using Navicat's Backup utility.  You'll find it on the main button bar:</p><img alt="backup_button (131K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/backup_button.jpg" style="max-width: 800px; height: auto;" /><p>You can back up many types of database entities, including tables, views, functions/stored procedures, and events:</p><img alt="backup_object_selection (63K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/backup_object_selection.jpg" height="472" width="602" /><h1 class="blog-sub-title">Structure Synchronization Steps</h1><p>The wizard guides you through each step of the synchronization process via several screens as follows:</p><h3>Setting the Source and Destination Databases</h3><p>The first screen sets the connection and database details where the target database structure will be compared to that of the source: </p><img alt="source_dest_screen (72K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/source_dest_screen.jpg" height="681" width="665" /><h3>Structure Comparison</h3><p>The Structure Comparison screen is where you can compare and identify the differences between two databases.  
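The differences that the wizard detects ultimately resolve to ordinary DDL statements; a generated synchronization script might include statements along these lines (the table and column names here are purely hypothetical):</p><pre>ALTER TABLE customer ADD COLUMN loyalty_points INT DEFAULT 0;
ALTER TABLE rental MODIFY COLUMN return_date DATETIME NULL;
CREATE INDEX idx_rental_return_date ON rental (return_date);</pre><p>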
You can group items by Operation or Object Type: </p><img alt="operation_screen (130K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/operation_screen.jpg" height="678" width="602" /><h3>Deploying the Script</h3><p>The third and final screen shows the generated deployment script:</p><img alt="deployment_script_screen (158K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/deployment_script_screen.jpg" height="681" width="665" /><p>To run the script, click the Execute button at the bottom of the screen. Results will be displayed on the Message Log tab of the same screen:</p><img alt="message_log (116K)" src="https://www.navicat.com/link/Blog/Image/2020/20200311/message_log.jpg" height="681" width="665" /><p>You can save the current Synchronization profile for later use or load an existing profile at any time.</p><p>There is also a Back button, should you wish to recompare database structures.</p><h1 class="blog-sub-title">Conclusion</h1><p> In today's blog, we learned about the inherent risk of destroying existing data that comes with altering the database structure and how to minimize it using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a>'s Structure Synchronization wizard to update one database's schema structure to match that of another.</p><p>Interested in finding out more about Navicat Premium 15? You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>The NULL Value and its Purpose in Relational Database Systems</title>
<link>https://www.navicat.com/company/aboutus/blog/1312-the-null-value-and-its-purpose-in-relational-database-systems.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>The NULL Value and its Purpose in Relational Database Systems</title></head><body><b>Mar 3, 2020</b> by Robert Gravelle<br/><br/><p>In databases, the NULL value is one that has a very particular meaning. Thus, it is important to understand that a NULL value is different from a zero value or a field that contains spaces. In today's blog, we'll explore what the NULL value means and how to work with NULLs in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><h1 class="blog-sub-title">What is NULL?</h1><p>It should be noted that the NULL value is not unique to databases. It's found in most computer programming languages as well, where it evolved as a built-in constant with a value of zero. In fact, it's the NUL character (character code 0) that still terminates strings in C. However, over time, it came to mean "nothing". Specifically, NULL became a special pointer value that points to nothing - what is commonly referred to as a "null pointer".</p><p>In a database, zero is a value which has meaning, so NULL became a special marker to indicate that no value exists. In that sense, NULL does not refer to a memory location, as it does for programming languages.  In a database, the NULL value indicates a lack of a value, which is not the same thing as a value of zero. To illustrate, consider the question "How many CDs does Rob own?" The answer may be "zero" (0) or NULL. In the latter case, the NULL value could mean that we do not know how many CDs Rob owns. Later, the value might be updated with a numeric value once we have ascertained how many CDs Rob owns. </p><h1 class="blog-sub-title">Allowing NULLs as a Column Value</h1><p>Any column that is part of a KEY must not allow NULLs, but for other fields, it's completely up to you. 
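In SQL, nullability is declared per column when the table is created; here's a minimal sketch (the table and column names are purely illustrative):</p><pre>CREATE TABLE cd_collection (
    id       INT PRIMARY KEY,        -- key columns must not allow NULLs
    title    VARCHAR(100) NOT NULL,  -- a value is always required
    cd_count INT                     -- nullable: the count may be unknown
);</pre><p>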
Including the NOT NULL clause next to your column definitions at table creation time will force the user to include a value for that column in INSERT and UPDATE operations.  Otherwise, an error will be thrown, and the operation will fail.  <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s Table Design screen includes a <i>Not null</i> column to designate a field as nullable or non-nullable.  Here it is in Navicat 15's new Dark Mode:</p><img alt="Not_Null_column (64K)" src="https://www.navicat.com/link/Blog/Image/2020/20200226/Not_Null_column.jpg" height="291" width="771" /><p>On new fields, the <i>Not null</i> checkbox is automatically deselected.</p><h1 class="blog-sub-title">Referencing NULL Values in Queries</h1><p>When writing queries, there are times that you'll want to filter out rows with NULL values.  Other times, you'll specifically want to retrieve rows that contain NULLs. Here's how to do both:</p><p>Queries can filter out nulls using the IS NOT NULL clause. Here's an example that excludes films from the Sakila database that contain a NULL original_language_id:</p><img alt="Not_Null_query (75K)" src="https://www.navicat.com/link/Blog/Image/2020/20200226/Not_Null_query.jpg" height="341" width="752" /><p>Likewise, you can find out which rows contain a NULL value by dropping the NOT from the NOT NULL clause.  Now, only those rows with a NULL original_language_id are returned:</p><img alt="null_query (100K)" src="https://www.navicat.com/link/Blog/Image/2020/20200226/null_query.jpg" height="413" width="775" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we explored what the NULL value means and how to work with this special value in relational databases.</p><p>Interested in finding out more about <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? 
You can try it for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Choosing Between VARCHAR and TEXT in MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1308-choosing-between-varchar-and-text-in-mysql.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Choosing Between VARCHAR and TEXT in MySQL</title><meta name="author" content="Navicat" /><meta name="description" content="One of the changes in MySQL version 5.0.3 included an increase to the maximum length of VARCHAR fields from 255 to 65,535 characters. That made the VARCHAR type more similar to TEXT than ever before. For those of us who design database tables, choosing between VARCHAR and TEXT now became more challenging as a result. In today's blog, we'll outline the key differences between the two and lay out the factors to consider when deciding which data type to go with." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool"/><meta name="robots" content="index, follow" /></head><body><b>Feb 19, 2020</b> by Robert Gravelle<br/><br/><p>One of the changes in MySQL version 5.0.3 included an increase to the maximum length of VARCHAR fields from 255 to 65,535 characters. That made the VARCHAR type more similar to TEXT than ever before. For those of us who design database tables, choosing between VARCHAR and TEXT now became more challenging as a result. In today's blog, we'll outline the key differences between the two and lay out the factors to consider when deciding which data type to go with. </p><h1 class="blog-sub-title">Some Differences Between VARCHAR and TEXT</h1><p>While both data types share a maximum length of 65,535 characters, there are still a few differences:</p><ul>    <li>The VAR in VARCHAR means that you can set the max size to anything between 1 and 65,535. TEXT fields have a fixed max size of 65,535 characters.</li>    <li>A VARCHAR column can be indexed in full, whereas a TEXT column can only be indexed by specifying a prefix length.</li>    <li>VARCHAR is stored inline with the table (at least for the MyISAM storage engine), making it potentially faster when the size is reasonable. 
Of course, how much faster depends on both your data and your hardware. Meanwhile, TEXT is stored off table with the table having a pointer to the location of the actual storage.</li>    <li>Using a TEXT column in a sort will require the use of a disk-based temporary table, as the MEMORY (HEAP) storage engine does not support TEXT columns.</li></ul><h1 class="blog-sub-title">TEXT Types</h1><p>Should you require the TEXT type, know that there are actually three flavors; in addition to TEXT, there are also MEDIUMTEXT and LONGTEXT varieties.  The latter two are for storing textual content that is longer than 65,535 characters. MEDIUMTEXT stores strings up to 16 MB, and LONGTEXT up to 4 GB!  It should go without saying that you should avoid using these larger types unless you have <strong>a lot</strong> of storage space.</p><h1 class="blog-sub-title">Selecting VARCHAR and TEXT Types in Navicat</h1><p>In both <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> and <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, the Object Designer allows you to create and maintain all sorts of database objects, including Tables, Views, Functions, Indexes, and, of course, columns. Under the Type header, you can select a column's data type simply by selecting it from a drop-down. 
As you can see, it contains the text, mediumtext, and longtext types:</p><img alt="types_dropdown (4K)" src="https://www.navicat.com/link/Blog/Image/2020/20200219/types_dropdown.png" height="143" width="439" /><p>As for the VARCHAR type, you can also select it from the Type drop-down, but then you should edit the Length value if you want a value other than 255 (the default).</p><img alt="table_designer (26K)" src="https://www.navicat.com/link/Blog/Image/2020/20200219/table_designer.png" height="462" width="781" /><p><i>TIP: Since TEXT fields can get quite long, Navicat has a FORM view that gives them more room:</i></p><img alt="form_view (15K)" src="https://www.navicat.com/link/Blog/Image/2020/20200219/form_view.png" height="495" width="623" /><h1 class="blog-sub-title">Conclusion</h1><p>The take-away we can draw from all of this is that one should use a VARCHAR field instead of TEXT for columns between 255 and 65k characters if possible. That will potentially lead to fewer disk reads and writes.</p><p>Interested in finding out more about <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql">Navicat for MySQL</a> or <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>? You can try both for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>Eliminating Repeating Groups In Your Database Tables</title>
<link>https://www.navicat.com/company/aboutus/blog/1307-eliminating-repeating-groups-in-your-database-tables.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Eliminating Repeating Groups In Your Database Tables</title><meta name="author" content="Navicat" /><meta name="description" content="A repeating group is a series of fields/attributes that are repeated throughout a database table. It is a common problem faced by organizations both large and small, one that can have several ramifications. The problem of repeating groups can become a nightmare to deal with. In today's blog, we'll learn how to identify repeating groups both during design time and in existing databases, as well as how to fix them. Since repeating groups are a phenomenon that can affect any relational database, we'll use Navicat Premium as our database development tool." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Feb 13, 2020</b> by Robert Gravelle<br/><br/><p>A repeating group is a series of fields/attributes that are repeated throughout a database table. It is a common problem faced by organizations both large and small, one that can have several ramifications. For example, the same set of information being present in different areas can cause data redundancy and data inconsistency. Moreover, all of this repeating data can eat up a lot of valuable disk space and take a long time to search through. The problem of repeating groups can be manageable in small organizations, but for larger organizations, which must manage huge volumes of information, repeating groups can become a nightmare to deal with. </p><p>In today's blog, we'll learn how to identify repeating groups both during design time and in existing databases, as well as how to fix them.  
Since repeating groups are a phenomenon that can affect any relational database, we'll use <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as our database development tool.</p><h1 class="blog-sub-title">An Example of a Repeating Group</h1><p>The <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a> contains a number of database entities relating to a fictional video rental store. Although its tables have been normalized to Third Normal Form (3NF), for the purposes of this tutorial, we'll consider that the film table contains data about actors who appear in each film.  Here is a sampling of rows from that table:</p><img alt="film_and_actors_repeating_groups (47K)" src="https://www.navicat.com/link/Blog/Image/2020/20200212/film_and_actors_repeating_groups.jpg" height="191" width="566" /><p>You can see that each actor adds an extra row to the table.  Worse still, actors' names are repeated every time that they come up.  The problem is that an actor is a separate and distinct entity from a film.  Hence, the actor data needs to go into its own table.</p><h1 class="blog-sub-title">Fixing Repeating Groups</h1><p>Even though repeating groups are not, strictly speaking, a violation of first normal form (1NF), the process of converting your data from Un-Normalized Form (UNF) to 1NF will eliminate repeating groups. 
Here are the steps for doing that:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Identify the repeating groups of data.</li><li>Remove the repeating group fields to a new table, taking a copy of the original table's primary key with them.</li><li>The original primary key will no longer be unique in the new table, so assign the new relation its own primary key, using the original primary key as part of a composite key.</li></ul><p>Since we've already identified the repeating groups, let's re-design the table so that repeating group fields are omitted and given their own table.</p><p>Navicat Premium comes with a built-in <a class="default-links" href="https://www.navicat.com/en/products/navicat-data-modeler" target="_blank">Data Modeler</a>. It helps you visually design high-quality conceptual, logical and physical data models. From there, you can generate database structures from a model. The Data Modeler also works in reverse, performing reverse engineering from existing databases. Other features include importing from ODBC data sources, generating complex SQL/DDL, and printing models to files.</p><p>Here is a model showing the existing films_and_actors table:</p><img alt="film_and_actors_model (67K)" src="https://www.navicat.com/link/Blog/Image/2020/20200212/film_and_actors_model.jpg" height="487" width="512" /><p>To separate actors from films, we need to add a new table to host the actor attributes.  We should also give it an ID PK field that will link to the same (new FK) field in the original table.</p><p>You'll also want to rename tables to reflect that the films table only contains films and actors only stores actor information.</p><h3>Linking the films and actors Tables</h3><p>How you link the tables together will depend on their particular relationship to each other.  In this case, a film may have zero or more actors, and actors may appear in one or more films.  
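This kind of relationship can be sketched in DDL as follows (the names and types are illustrative, not taken from Sakila):</p><pre>CREATE TABLE films (
    film_id INT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL
);

CREATE TABLE actors (
    actor_id   INT PRIMARY KEY,
    first_name VARCHAR(45) NOT NULL,
    last_name  VARCHAR(45) NOT NULL
);

-- junction table linking films and actors
CREATE TABLE film_actors (
    film_id  INT NOT NULL,
    actor_id INT NOT NULL,
    PRIMARY KEY (film_id, actor_id),
    FOREIGN KEY (film_id) REFERENCES films (film_id),
    FOREIGN KEY (actor_id) REFERENCES actors (actor_id)
);</pre><p>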
Such a many-to-many relationship will require an intermediary table to link films and actors.  It will contain only film and actor IDs.  Here is the completed model in the Navicat Modeler:</p><img alt="film_actors_many_to_many_model (104K)" src="https://www.navicat.com/link/Blog/Image/2020/20200212/film_actors_many_to_many_model.jpg" height="591" width="587" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how to identify repeating groups both during design time and in existing databases, as well as how to fix them, using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s powerful Data Modeler. Navicat Premium adds over 100 enhancements and includes several new features to give you more ways than ever to build, manage, and maintain your databases!</p></body></html>]]></description>
</item>
<item>
<title>Listing Records Based On Averages</title>
<link>https://www.navicat.com/company/aboutus/blog/1306-listing-records-based-on-averages.html</link>
<description><![CDATA[<html><head><title>Listing Records Based On Averages</title><meta name="author" content="Navicat" /><meta name="description" content="ANSI SQL includes several aggregate functions, which allow you to perform a calculation on a set of values to return their result as a single value. By default, aggregate functions apply to all rows, but you can narrow down the field by applying a WHERE clause to the SELECT statement. In today's blog we'll apply these techniques on the AVG() function, but they will work equally well with all aggregate functions." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Feb 5, 2020</b> by Robert Gravelle<br/><br/><p>ANSI SQL includes several aggregate functions, which allow you to perform a calculation on a set of values to return their result as a single value. These include COUNT(), MIN(), MAX(), SUM(), and AVG(), among others. By default, aggregate functions apply to all rows, but you can narrow down the field by applying a WHERE clause to the SELECT statement. Moreover, you can conditionally select certain rows using a few more techniques that we'll explore here today using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. These include the use of a CASE statement as well as the GROUP BY clause.  We'll apply these techniques on the AVG() function, but they will work equally well with all aggregate functions.</p><h1 class="blog-sub-title">Using the AVG() Function</h1><p>The AVG() function retrieves the average value of a given expression. If the function does not find a matching row, it returns NULL. We'll run our queries against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a>.  
It was originally developed for MySQL, but has since been ported to most popular DBMSes. <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> is the ideal database client to use here because it supports everything from MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, to SQLite. Moreover, it's compatible with cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas as well.</p><p>The film table stores information about individual films for the fictional Sakila video rental store.  Columns include title, description, running time, rental cost, rating, and others. </p><img alt="film_columns (154K)" src="https://www.navicat.com/link/Blog/Image/2020/20200205/film_columns.jpg" height="438" width="717" /><p>We can use the AVG() function to determine the average rental cost for ALL films as follows:</p><img alt="auto_complete (46K)" src="https://www.navicat.com/link/Blog/Image/2020/20200205/auto_complete.jpg" height="339" width="512" /><h1 class="blog-sub-title">Using the CASE Statement</h1><p>The AVG() function accepts an expression.  This can be a column name or any other valid expression.  Therefore, we can apply the AVG() function conditionally by passing a CASE statement to the AVG() function as a parameter. We could determine the average rental_rate for only those films that have a PG rating using a CASE statement like so:</p><img alt="pg_avg (61K)" src="https://www.navicat.com/link/Blog/Image/2020/20200205/pg_avg.jpg" height="306" width="602" /><p>The above query shows the total number of films, films that do not have a PG rating, and the average rental rate for all films as well as those with a PG rating. 
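The conditional average at the heart of that query can be sketched in plain SQL like this (using the rating and rental_rate columns of Sakila's film table); because the CASE statement has no ELSE branch, non-PG rows yield NULL, which AVG() simply ignores:</p><pre>SELECT
    COUNT(*) AS total_films,
    AVG(rental_rate) AS avg_rate_all,
    AVG(CASE WHEN rating = 'PG' THEN rental_rate END) AS avg_rate_pg
FROM film;</pre><p>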
The CONCAT() and FORMAT() functions are employed to display the rental_rate as currency.</p><h1 class="blog-sub-title">Using the GROUP BY Clause</h1><p>Another way to apply AVG() to only certain rows is to use GROUP BY. It aggregates the results on the basis of the selected column. Hence, grouping results by the rating will list the average rental_rate for each rating:</p><img alt="group_by (55K)" src="https://www.navicat.com/link/Blog/Image/2020/20200205/group_by.jpg" height="381" width="585" /><p>We could narrow down the rows selected further by using a WHERE and/or HAVING clause(s). Both may be employed separately or in tandem. For instance, the next query selects films with a language_id of 1 (English) whose count by rating totals less than 200:</p><img alt="group_by_with_where_and_having (60K)" src="https://www.navicat.com/link/Blog/Image/2020/20200205/group_by_with_where_and_having.jpg" height="404" width="603" /><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we employed the CASE statement and GROUP BY clause to conditionally list film records based on averages. </p><p>Queries were executed in <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1300-navicat-premium-15-the-most-powerful-yet-2.html" target="_blank">Navicat Premium 15</a>.  It adds over 100 enhancements and includes several new features to give you more ways than ever to build, manage, and maintain your databases!</p></body></html>]]></description>
</item>
<item>
<title>Selecting All But One Column In MySQL</title>
<link>https://www.navicat.com/company/aboutus/blog/1304-selecting-all-but-one-column-in-mysql.html</link>
<description><![CDATA[<html><head><title>Selecting All But One Column In MySQL</title><meta name="author" content="Navicat" /><meta name="description" content="SQL makes selecting all fields in a table quite trivial via the SELECT * (SELECT ALL) clause. Unfortunately, as soon as you omit a column from the list, the SELECT ALL statement goes out the window. What if we could select every column but one - selecting by exclusion rather than inclusion? It can be done. These will be the focus of today's blog." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Jan 23, 2020</b> by Robert Gravelle<br/><br/>  <p>SQL makes selecting all fields in a table quite trivial via the <i>SELECT *</i> (SELECT ALL) clause.  Unfortunately, as soon as you omit a column from the list, the SELECT ALL statement goes out the window. Writing out every column name can quickly become tedious, especially if you happen to be dealing with tables that contain dozens of columns. What if we could select every column but one - selecting by exclusion rather than inclusion? It can be done.  In fact there are a couple of ways to do it - one simple, the other, a bit less so. These will be the focus of today's blog.</p><h1 class="blog-sub-title">Method 1: Using The INFORMATION_SCHEMA.COLUMNS table</h1><p>The INFORMATION_SCHEMA provides access to database metadata, information about the MySQL server such as the name of a database or table, the data type of a column, or access privileges. More specifically, the COLUMNS table provides information about columns in tables, including column names. 
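For example, fetching the column names of a given table is a simple query (the schema and table names here refer to the Sakila database used below):</p><pre>SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'sakila'
  AND TABLE_NAME = 'film';</pre><p>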
</p><p>The <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a>'s film table contains the highest number of columns at thirteen.</p><img alt="film_columns (50K)" src="https://www.navicat.com/link/Blog/Image/2020/20200123/film_columns.jpg" height="484" width="394" /><p>Here's how we would use the INFORMATION_SCHEMA.COLUMNS table to fetch all but the <i>original_language_id</i> column:</p><img alt="column_selection (57K)" src="https://www.navicat.com/link/Blog/Image/2020/20200123/column_selection.jpg" height="237" width="737" /><p>The GROUP_CONCAT function concatenates all of the column names into a single, comma-delimited string.  We can then replace the field to omit with an empty string!</p><h3 class="blog-sub-title">Executing the Query</h3><p>One small hurdle to overcome is that a MySQL query cannot accept dynamic column names.  The solution is to employ a Prepared Statement.  Here's the code that sets the @sql variable, prepares the statement, and executes it:</p><pre>SET @sql = CONCAT('SELECT ',
                  (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '&lt;columns_to_omit&gt;,', '')
                   FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = '&lt;table&gt;'
                   AND TABLE_SCHEMA = '&lt;database&gt;'),
                  ' FROM &lt;table&gt;');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;</pre><p>Inserting the column, table, and schema information into the query yields the results that we're after:</p><img alt="query_results" src="https://www.navicat.com/link/Blog/Image/2020/20200123/query_results.jpg" height="629" width="798" /><h1 class="blog-sub-title">Method 2: Using Navicat</h1><p>The main goal of database development and administration tools like <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat</a> is to increase productivity. As such, Navicat is designed to make your job as quick and easy as possible. 
To that end, the SQL Editor helps you to code faster thanks to Code Completion and customizable Code Snippets that offer suggestions for keywords and strip the repetition from coding. And if that wasn't enough, Navicat also provides a useful tool called Query Builder for building queries visually. It allows you to create and edit queries with only a cursory knowledge of SQL. While the Query Builder is marketed predominantly to more novice coders, those more proficient in SQL can still benefit from it for certain tasks.  One such task is choosing columns. </p><p>In the Query Builder, there is a checkbox next to the table name to select all of its columns. If we click on it, we can then simply uncheck the original_language_id field to remove it from the column list:</p><img alt="query_builder" src="https://www.navicat.com/link/Blog/Image/2020/20200123/query_builder.jpg" height="732" width="581" /><p>Clicking the OK button then closes the dialog and adds the SQL code to the editor:</p><img alt="code_in editor" src="https://www.navicat.com/link/Blog/Image/2020/20200123/code_in%20editor.jpg" height="329" width="384" /><p>Creating queries using the Query Builder offers a few advantages over writing code by hand:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li> it minimizes typos </li> <li> it generates formatted SQL that's easy to read</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned a couple of techniques to select every column in a table but one. </p>  <p>Interested in finding out more about <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p></body></html>]]></description>
</item>
<item>
<title>How to Tell when it's Time to Rebuild Indexes in Oracle</title>
<link>https://www.navicat.com/company/aboutus/blog/1303-how-to-tell-when-it-s-time-to-rebuild-indexes-in-oracle.html</link>
<description><![CDATA[<html><head><title>How to Tell when it's Time to Rebuild Indexes in Oracle</title><meta name="author" content="Navicat" /><meta name="description" content="Every so often, we need to rebuild indexes in Oracle, because indexes become fragmented over time. This causes the performance of your database queries to degrade. Hence, rebuilding indexes every now and again can be quite beneficial. In today's blog, we'll learn how often to rebuild indexes and how to determine when an index needs to be rebuilt." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, Oracle, Rebuild Indexes in Oracle, database tool"/><meta name="robots" content="index, follow" /></head><body><b>Jan 15, 2020</b> by Robert Gravelle<br/><br/><p>Every so often, we need to rebuild indexes in Oracle, because indexes become fragmented over time.  This causes their performance, and by extension that of your database queries, to degrade. Hence, rebuilding indexes every now and again can be quite beneficial. Having said that, indexes should not be rebuilt too often, because it's a resource-intensive task. Worse, as an index is being rebuilt, locks will be placed on the index, preventing anyone from accessing it while the rebuilding occurs. Any queries trying to access this index in order to return the required results will be temporarily blocked until the rebuild is complete. </p><p>In today's blog, we'll learn how often to rebuild indexes and how to determine when an index needs to be rebuilt.</p><h1 class="blog-sub-title">How Often to Rebuild Indexes</h1><p>As mentioned in the introduction, rebuilding indexes is both a resource-intensive and blocking task. Both of these considerations make it ideal as an offline activity, to be run when as few users as possible are accessing the database. 
In general, this means during a scheduled maintenance window.</p><p>It is not really feasible to devise a catch-all plan with regard to when and how often to rebuild indexes. These decisions are highly dependent on the type of data you work with, as well as the indexes and queries that are utilized. With that in mind, here are a few guidelines regarding when to rebuild indexes:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;">    <li><strong>Rebuilding Indexes Nightly</strong>    <p>If your indexes fragment rapidly, and you have a nightly maintenance window that allows you to run the Rebuild Index task in addition to all your other maintenance tasks, then by all means go ahead.</p></li>    <li><strong>Weekly, at minimum</strong>    <p>If you can't rebuild indexes on a nightly basis, then it should be done at least once a week. If you wait much longer than a week, you risk hurting your server's performance due to the negative impact of wasted empty space and logical fragmentation.</p></li>    <li><strong>Alternative scheduling</strong>    <p>If you don't have a maintenance window that can accommodate this task at least once a week, then you need to pay close attention to how your indexes are faring.</p></li></ul><h1 class="blog-sub-title">Determining if an Index Needs to Be Rebuilt</h1><p>In Oracle, you can get an idea of the current state of an index by using the ANALYZE INDEX VALIDATE STRUCTURE command. Here's some sample output from the INDEX_STATS table:</p><pre>SQL> ANALYZE INDEX IDX_GAM_ACCT VALIDATE STRUCTURE;
Statement processed.

SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows FROM INDEX_STATS;

NAME          HEIGHT   LF_ROWS  LF_BLKS  DEL_LF_ROWS
------------- -------- -------- -------- -----------
IDX_GAM_ACCT  2        1        3        6

1 row selected. 
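SQL> -- A quick follow-up check (sketch): percentage of deleted leaf rows,
SQL> -- useful when applying the 20% rule of thumb (assumes lf_rows > 0)
SQL> SELECT name, del_lf_rows / lf_rows * 100 AS del_pct FROM INDEX_STATS;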
</pre><p>There are two rules of thumb to help determine whether an index needs to be rebuilt:</p><ol><li>If the index has a height greater than four, rebuild the index.</li><li>If deleted leaf rows exceed 20% of total leaf rows, rebuild the index.</li></ol><h1 class="blog-sub-title">Rebuilding an Index</h1><p>In Oracle, you can use the ALTER INDEX REBUILD command to rebuild indexes. It re-creates an existing index or a specified partition of a partitioned index.</p><p>The ALTER INDEX REBUILD command has a few forms:</p><pre>ALTER INDEX [schema.]index REBUILD
    [PARAMETERS ('rebuild_params [physical_storage_params]')]
    [{ NOPARALLEL | PARALLEL [ integer ] }];</pre><p>OR</p><pre>ALTER INDEX [schema.]index REBUILD ONLINE
    [PARAMETERS ('rebuild_params [physical_storage_params]')]
    [{ NOPARALLEL | PARALLEL [ integer ] }];</pre><p>OR</p><pre>ALTER INDEX [schema.]index REBUILD PARTITION partition
    [PARAMETERS ('rebuild_params [physical_storage_params]')];</pre><h1 class="blog-sub-title">Handling Unusable Indexes</h1><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-for-oracle">Navicat for Oracle</a>'s Maintain Index facility has a couple of useful options for handling unusable indexes:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li><strong>Rebuild</strong><p>Re-creates an existing index or one of its partitions or subpartitions. If the index is marked unusable, then a successful rebuild will mark it usable.</p></li><li><strong>Make Unusable</strong><p>Makes the index unusable. 
An unusable index must be rebuilt, or dropped and re-created, before it can be used.</p></li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how often to rebuild indexes and how to determine when an index needs to be rebuilt.</p><p>If you'd like to learn more about Navicat for Oracle, visit the <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-oracle" target="_blank">product page</a>.</p></body></html>]]></description>
</item>
<item>
<title>Storing Images in MySQL with Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/1301-storing-images-in-mysql-with-navicat.html</link>
<description><![CDATA[<!DOCTYPE HTML><title>Storing Images in MySQL with Navicat</title><head><meta name="author" content="Navicat" /><meta name="description" content="Navicat development and administration tools provide excellent support for image management. In today's blog, we'll learn how Navicat makes storing images a simple process. For the purposes of demonstration, I'll be using Navicat Premium against a MySQL 8 database, but the same procedure would apply to other relational databases as well." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Jan 08, 2020</b> by Robert Gravelle<br/><br/><p>The number of images in web applications has been growing steadily in recent years. There is also a need to distinguish between images of different sizes, like thumbnails, web display images, and the like. For example, one application that I recently developed shows news items where each item has a thumbnail and main article image. Another app shows company logos in small and large sizes. </p><p>Most of the time, images can be stored on the web server and then referenced using the URL.  That only requires storing the path string in the database, rather than the image itself.  However, there are times that this is not feasible, such as where the app has insufficient rights on the filesystem.  In those cases, you can store images directly in the database and then load them using application code.</p><p>Navicat development and administration tools provide excellent support for image management. In today's blog, we'll learn how Navicat makes storing images a simple process. 
For the purposes of demonstration, I'll be using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> against a MySQL 8 database, but the same procedure would apply to other relational databases as well.</p><h1 class="blog-sub-title">Designing the Table</h1><p>In MySQL, the preferred data type for image storage is BLOB.  However, BLOB actually comes in several sizes.  The one you choose depends on the size of the images that you will be storing.  If in doubt, opt for the larger-capacity BLOB! Here are the three most relevant BLOB types:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>BLOB: Can handle up to 65,535 bytes of data. </li><li>MEDIUMBLOB: The maximum length supported is 16,777,215 bytes. </li><li>LONGBLOB: Stores up to 4,294,967,295 bytes of data.</li></ul><p>With that in mind, here's a table definition that would be well suited to thumbnail images, but not much larger:</p><img alt="table_def (39K)" src="https://www.navicat.com/link/Blog/Image/2020/20200108/table_def.jpg" height="173" width="613" /><p>Besides the image itself, you may find it useful to store other information about the image, such as an ID, name, description, size, type (JPEG, GIF, BITMAP, etc.), category, and so on.</p><h1 class="blog-sub-title">Loading Images into the <i>images</i> Table</h1><p>Using Navicat, there's no need to write SQL code to load images. 
Instead, you can use the standard File Browser to locate and insert image files.</p><p>Whenever you view table contents in either Grid or Form view, you can select how you want Navicat to treat data from the <i>data type</i> drop-down:</p><img alt="data_type_dropdown (13K)" src="https://www.navicat.com/link/Blog/Image/2020/20200108/data_type_dropdown.jpg" height="130" width="259" /><p>Choosing <i>Image</i> from the drop-down adds an image preview pane underneath the table/row contents:</p><img alt="open_file_icon (29K)" src="https://www.navicat.com/link/Blog/Image/2020/20200108/open_file_icon.jpg" height="393" width="476" /><p>On the left of the file preview, you'll find three icons: Load, Save to Disk, and Clear.  To load an image, simply click the <i>Load</i> icon and select the image using the operating system's standard File Browser dialog. Once inserted, the image - as well as its size in bytes - will appear in the preview pane: </p><img alt="image_preview (74K)" src="https://www.navicat.com/link/Blog/Image/2020/20200108/image_preview.jpg" height="507" width="465" /><p><em>Note that the above image requires a MEDIUMBLOB as its size exceeds 65,535 bytes!</em></p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to store images in a MySQL 8 database using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 15</a>.</p><p>Now is the perfect time to purchase Navicat Premium, as <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1300-navicat-premium-15-the-most-powerful-yet-2.html" target="_blank">version 15</a> adds over 100 enhancements and includes several new features to give you more ways than ever to build, manage, and maintain your databases!</p></body></html>]]></description>
</item>
<item>
<title>Navicat Premium 15 - the Most Powerful Yet!</title>
<link>https://www.navicat.com/company/aboutus/blog/1300-navicat-premium-15-the-most-powerful-yet-2.html</link>
<description><![CDATA[<html>  <title>Navicat Premium 15 - the Most Powerful Yet! | Navicat Blog</title><head><meta name="author" content="Navicat" /><meta name="description" content="Perhaps you've heard that version 15 of Navicat's flagship product, Navicat Premium, was officially released on November 25th.  It comes packed with numerous improvements and features to address all of your database development and administration needs. In this blog, we'll be taking a look at other improvements, including Data Transfer, Query Builder, Data Modeler and more!" /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Dec 30, 2019</b> by Robert Gravelle<br/><br/><p>Perhaps you've heard that version 15 of Navicat's flagship product, <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, was officially released on November 25th.  It comes packed with numerous improvements and features to address all of your database development and administration needs. In addition to over 100 enhancements, Navicat includes several new features to give you more ways than ever to build, manage, and maintain your databases.  In the last blog, we explored the Data Visualization feature. Today, we'll be taking a look at other improvements, including Data Transfer, Query Builder, Data Modeler and more!</p><h1 class="blog-sub-title">Data Transfer</h1><p>Although Navicat already supports the transfer of database objects from one database and/or schema to another, or to an SQL file, version 15 brings a whole new experience along with a number of new functions to the Data Transfer utility. The new design includes an intuitive interface for customizing the fields and specifying the number of rows you wish to transfer. 
For instance, you can choose specific fields to transfer and even change the field names.  You can also limit the number of rows to transfer using custom filters.</p><tr><td align="middle"><img alt="Navicat Premium 15 - Data Transfer" src="https://www.navicat.com/link/Blog/Image/2019/20191217/data_transfer.jpg" style="max-width: 100%;"></td></tr><p>Perhaps even more importantly, Navicat 15's Data Transfer is designed to quickly migrate massive amounts of data - quick enough to accomplish the most complex transfers faster than ever before.</p><h1 class="blog-sub-title">Query Builder</h1><p>Navicat 15 introduces a whole new approach to writing SQL via the new Query Builder. Whereas version 12 showed all of the query syntax in one statement, version 15 breaks it down into clauses: Select, From, Where, Group By, Having, Order By, and Limit.  Moreover, the resulting SQL statement is displayed in the right pane, so you can quickly correct any syntax errors.</p><tr><td align="middle"><img alt="Navicat Premium 15 - Query Builder" src="https://www.navicat.com/link/Blog/Image/2019/20191217/query_builder.jpg" style="max-width: 100%;" /></td></tr><p>In addition to the new UI, Navicat 15 now supports subqueries to further fine-tune your query results.</p><h1 class="blog-sub-title">Data Modeling</h1><p>Navicat 15 includes the new and improved <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1251-introducing-navicat-data-modeler-3-0" target="_blank">Data Modeler 3.0</a>. The updated Data Modeler introduces a new mechanism for Database Synchronization, starting with a more intuitive way to visually compare and identify the differences between the model and database. It shows a side-by-side DDL comparison that makes locating all the object differences a snap. 
</p><tr><td align="middle"><img alt="Navicat Premium 15 - Data Modeling" src="https://www.navicat.com/link/Blog/Image/2019/20190910/updates_to_synch.jpg" style="max-width: 100%;" /></td></tr><p>Once you're ready, you can choose and reorder your synchronization scripts to update the destination database.</p><h1 class="blog-sub-title">Dark Mode</h1><p>A lot of people find that bright themes, including the default white of Windows, make their eyes tired after many hours in front of the screen.  For that reason, dark themes have become increasingly prevalent in recent years.  Navicat 15 gives you the choice between the traditional Windows (light) theme and the new Dark Theme. In contrast to the mostly white Windows themes, the Dark Theme displays dark surfaces across the majority of the UI.</p><tr><td align="middle"><img alt="Navicat Premium 15 - Dark Mode" src="https://www.navicat.com/link/Blog/Image/2019/20191217/dark_mode.jpg" style="max-width: 100%;" /></td></tr><p>You'll find it on the General options screen.</p><h1 class="blog-sub-title">Native Linux support</h1><p>Whereas previous versions of Navicat required Wine to run on Linux systems, version 15 is a true Linux application, so users can now enjoy a UI that is more in line with other Linux apps!</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we got a preview of Navicat Premium 15's top new features and improvements.  Why not download it and try it out for yourself?  <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Here</a>'s a free 14-day trial!</p></body> </html>]]></description>
</item>
<item>
<title>Perform Full-text Searches in MySQL (Part 3)</title>
<link>https://www.navicat.com/company/aboutus/blog/1299-perform-full-text-searches-in-mysql-part-3.html</link>
<description><![CDATA[<!DOCTYPE HTML><html><head><title>Perform Full-text Searches in MySQL (Part 3) | Navicat Blog</title><meta name="author" content="Navicat" /><meta name="description" content="Following Part 2 of the series on full-text indexing and searching in MySQL, let's see how to do Boolean Full-Text searches in MySQL." /><meta name="keywords" content="Navicat, Navicat GUI, Navicat Premium, database, database design, database management, SQL, MySQL, database tool, Maria DB, Oracle"/><meta name="robots" content="index, follow" /></head><body><b>Dec 19, 2019</b> by Robert Gravelle<br/><br/><p>Welcome to part 3 of this series on full-text indexing and searching in MySQL. In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1261-perform-full-text-searches-in-mysql-part-1.html" target="_blank">Part 1</a>, we saw how MySQL provides full-text search capability via FULLTEXT indexing along with the following three distinct types of full-text searches: </p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;">    <li>Natural Language Full-Text Searches</li>    <li>Boolean Full-Text searches</li>    <li>Query expansion searches</li></ul><p>In <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/1262-perform-full-text-searches-in-mysql-part-2.html" target="_blank">Part 2</a>, I described how to perform Natural Language full-text searches in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>. Today's blog follows where Part 2 left off and covers the next type of full-text searching: Boolean Full-Text searches.</p><h1 class="blog-sub-title">Boolean Mode Described</h1><p>Boolean mode is more word-driven than natural language search.  As such, Boolean full-text search supports very complex queries that include Boolean operators. 
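In SQL terms, such a query looks like this (a sketch against the Sakila film table, which this post queries throughout):</p><pre>-- must contain "Butler", must not contain "Documentary"
SELECT title, description
FROM film
WHERE MATCH (description)
      AGAINST ('+Butler -Documentary' IN BOOLEAN MODE);</pre><p>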
For experienced users, Boolean full-text searching offers a means to perform some very advanced searches.</p><p>Here's how it works:</p><p>To perform a full-text search in the Boolean mode, you include the IN BOOLEAN MODE modifier in the AGAINST  expression. Recall that, in the last installment, we added a full-text index to the film table of the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a> so that we could perform full-text searches on the description field. Here's an example that returns all films whose descriptions contain the word "Butler":</p><tr><td align="middle"><img alt="MySQL - Boolean Mode - 1" src="https://www.navicat.com/link/Blog/Image/2019/20191227/boolean_mode_1.jpg" style="max-width: 100%;" /></td></tr><h1 class="blog-sub-title">Some More Complex Examples</h1><p>The above search is simple enough to not require full-text searching.  It gets a lot more interesting once you start doing things like excluding matches that contain certain keywords. For instance, we can find films whose descriptions contain the word "Butler" that are not documentaries by preceding the word "Documentary" with the exclude Boolean operator ( - ):</p><tr><td align="middle"><img alt="MySQL - Boolean Mode Exclude" src="https://www.navicat.com/link/Blog/Image/2019/20191227/boolean_mode_exclude.jpg" style="max-width: 100%;" /></td></tr><p>That returns 61 rows, compared to 73 for our previous query.</p><h1 class="blog-sub-title">Multi-word Matching</h1><p>We can also search for rows whose description match multiple words using the ( + ) include operator.  Prefixing a word with it tells the search engine to only match rows that contain that word. That becomes an important distinction when there are multiple words, such as "+Butler Hunter Waitress".  
In that case, only rows whose description contains the word "Butler" are returned; "Hunter" and "Waitress" remain optional, although rows containing them will rank higher:</p><tr><td align="middle"><img alt="MySQL - Boolean Mode Multi" src="https://www.navicat.com/link/Blog/Image/2019/20191227/boolean_mode_multi.jpg" style="max-width: 100%;" /></td></tr><p>Contrast the above results with those produced by a query with both the words "Butler" and "Hunter" prefixed with the ( + ) include operator:</p><tr><td align="middle"><img alt="MySQL - Boolean Mode Multi - 2" src="https://www.navicat.com/link/Blog/Image/2019/20191227/boolean_mode_multi_2.jpg" style="max-width: 100%;" /></td></tr><p>Now, matching rows must contain both "Butler" and "Hunter" but not necessarily "Waitress".</p><h1 class="blog-sub-title">A Quick Word on Relevancy Rankings</h1><p>Full-text searches rank results differently for InnoDB than MyISAM because InnoDB full-text search is modeled on the Sphinx full-text search engine, and the algorithms used are based on BM25 and TF-IDF ranking algorithms. </p><p>Some operators affect ranking so that we can further fine-tune results.  For example, we can search for rows that contain the word "Butler" but rank the row lower if it contains the words "Hunter" or "Waitress":</p><tr><td align="middle"><img alt="MySQL - Boolean mode rank lower" src="https://www.navicat.com/link/Blog/Image/2019/20191227/boolean_mode_rank_lower.jpg" style="max-width: 100%;" /></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to perform Boolean full-text searches using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>. Interested in finding out more about Navicat for MySQL? 
You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p><p>For a full listing of the boolean full-text operators, take a look at the <a class="default-links" href="https://dev.mysql.com/doc/refman/8.0/en/fulltext-boolean.html" target="_blank" >official MySQL docs</a>.</p></body></html>]]></description>
</item>
<item>
<title>Welcome to Navicat Premium 15! Data Visualization</title>
<link>https://www.navicat.com/company/aboutus/blog/1269-welcome-to-navicat-premium-15-data-visualization.html</link>
<description><![CDATA[<b>Dec 10, 2019</b> by Robert Gravelle<br/><br/><p>November 25 was the official launch date for Navicat Premium 15. Version 15 packs a wallop of new features and improvements, most notably in data transfers, the SQL Builder, and modeling. It also adds Data Visualization, Dark Mode and native Linux support. In today's blog we'll learn how the new Data Visualization feature helps us turn our data into visuals that provide valuable insights through a wide variety of charts and graphs.</p><h1 class="blog-sub-title">Workspaces, Data Sources, and Charts</h1><p>Data visualization is the graphical representation of information and data using visual components like charts, graphs, and maps. Data visualization tools provide an accessible way to see and understand trends, outliers, and other patterns in our data.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/visualization dashboard.jpg" style="max-width: 100%;"></td></tr><p>In Navicat, data visualizations are organized in a hierarchy of Workspaces, Data Sources, and Charts.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/charts button.jpg" style="max-width: 100%;"></td></tr><p>Clicking the Charts button displays the Workspace buttons in the Objects toolbar. From there, clicking on New Workspace opens the Charts Workspace window, which is where all of the action happens!</p><p>Whereas typical charts are based on a snapshot of data, data visualization can be applied to live data and refreshed at any time.</p><p>In a graphical interface, a workspace is a grouping of related objects to help manage them in one place. 
In Navicat, it's where you'll find all of your Data Sources, Charts, and Dashboards (more on those in a bit).</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/workspace.jpg" style="max-width: 100%;"></td></tr><p>A Data Source can be based on any valid database connection or even on multiple connections.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/new data source dialog.jpg" style="max-width: 100%;"></td></tr><p>Once a Data Source's objects are available, you can drag database objects such as tables, views and queries into the main window and fetch data from them. Navicat will auto-detect relationships between database objects.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/data source.jpg" style="max-width: 100%;"></td></tr><p>Data may be live or based on an archive. In Archive mode, Navicat takes a snapshot of the current data and bases charts on it.</p><h1 class="blog-sub-title">Charts</h1><p>The main tool of data visualization is charts. That's why Navicat 15 includes all of the major chart types that you need to explore your data. These include (just to name a few):</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Bar</li><li>Stacked bar</li><li>Line</li><li>Area</li><li>Pie</li><li>Doughnut</li><li>Scatter</li></ul><p>Charts are completely customizable in terms of fonts, text placement, colors, opacity, formatting, filtering, and more. Here's an example that I created in about five minutes!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/charts.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Dashboards</h1><p>In this age of Big Data, data dashboards are indispensable for aggregating huge amounts of data from disparate sources into a cohesive and digestible whole. 
A dashboard displays all this data in the form of tables, line charts, bar charts and gauges. A data dashboard is the most efficient way to track multiple data sources because it provides a central location for businesses to monitor and analyze performance.</p><p>In Navicat, a Dashboard is where you can group your charts along with images, shapes, and text to create slideshow presentations for targeted groups, such as developers, data analysts, and management.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191210/dashboard.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we explored Navicat 15's new Data Visualization feature. We'll check out some other new features and improvements next week. In the meantime, feel free to <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">download it</a> and give it a whirl!</p>]]></description>
</item>
<item>
<title>Perform Full-text Searches in MySQL (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/1262-perform-full-text-searches-in-mysql-part-2.html</link>
<description><![CDATA[<b>Oct 15, 2019</b> by Robert Gravelle<br/><br/><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1261-perform-full-text-searches-in-mysql-part-1.html" target="_blank">Part 1</a>, we saw how MySQL provides full-text search capability via FULLTEXT indexing along with three distinct types of full-text searches. In today's blog, we'll learn how to perform Natural Language full-text searches in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>.</p><h1 class="blog-sub-title">Natural Language full-text Searching Defined</h1><p>The idea behind natural language full-text searching is to seek documents (rows) that are relevant to a natural human language query such as "How do natural language full-text searches work?". If you've ever used an Internet search engine like Google, this is exactly how it works!</p><p>Relevance is expressed as a non-negative floating-point number, where zero means no similarity. Relevance can be based on various factors, including the number of words in the document, the number of unique words in the document, the total number of words in the collection, and the number of documents (rows) that contain a particular word.</p><h1 class="blog-sub-title">A MySQL Natural Language Full-text Search Example</h1><p>In MySQL, natural-language full-text searches are performed using the MATCH() and AGAINST() functions. The MATCH() function specifies the column where you want to search, whereas the AGAINST() function determines the <i>search expression</i> to be used.</p><p>The <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a> represents a fictional DVD rental store. The film table contains pertinent information about each film in the store's collection. Columns include the film's title, release year, running length, and a description. 
Here is a sample row in Navicat's Form View. It allows you to view, update, insert, or delete data as a form, in which the current record is displayed as a field name and its value:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191015/film_record.png" style="max-width: 100%;"></td></tr><p style="font-size: 18px;">Indexing the description Column</p><p>In order to search the description field in full-text mode, we first have to create a full-text index on the table. We can easily do that in Navicat as follows:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Open the film table in the Table Designer.</li><li>Select the Indexes tab.</li><li>Click the Add Indexes button.</li><li>Let's call our new index "idx_description".</li><li>In the fields textbox, select the description column.</li><li>Select FULLTEXT from the Index Type drop-down:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191015/description_index.jpg" style="max-width: 100%;"></td></tr></li><li>Leave the Index method blank as it is not required for a FULLTEXT index.</li><li>Finally, click the Save button to create the index.</li></ul><p style="font-size: 18px;">Query Time!</p><p>Let's open up the Query Editor and write a query that will look up rows whose description contains the phrase "Database Administrator". Navicat can help us compose our query by suggesting the names of fields and even the functions we require:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191015/auto_complete.jpg" style="max-width: 100%;"></td></tr><p>Here is the final query and results for the phrase "Database Administrator". 
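Spelled out in SQL, such a natural-language search looks roughly like this (a sketch against the Sakila film table; the score variant and its alias are assumptions, not taken from the screenshots):

```sql
-- Natural-language full-text search against the FULLTEXT index on description
SELECT title, description
FROM film
WHERE MATCH(description) AGAINST('Database Administrator');

-- The same search, also exposing the relevance score for each row
SELECT title,
       MATCH(description) AGAINST('Database Administrator') AS score
FROM film
WHERE MATCH(description) AGAINST('Database Administrator')
ORDER BY score DESC;
```

The exact rows returned will of course depend on your data.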
There are surprisingly many films about DBAs!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191015/db_admin_query.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Viewing the Scores</h1><p>As explained above, relevance is expressed as a non-negative floating-point number, where zero means no similarity. We can view the score of each record by adding the MATCH() and AGAINST() functions to the column list, for example:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191015/score.jpg" style="max-width: 100%;"></td></tr><p>This helps us determine a cut-off point for the closest matches.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to perform Natural Language full-text searches in <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>.</p><p>Interested in Navicat for MySQL? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes!</p>]]></description>
</item>
<item>
<title>Perform Full-text Searches in MySQL (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/1261-perform-full-text-searches-in-mysql-part-1.html</link>
<description><![CDATA[<b>Oct 11, 2019</b> by Robert Gravelle<br/><br/><p>Full-text Search, or FTS, is one of the techniques employed by search engines to find results in their database(s). You too can harness the power of FTS to search for patterns that are too complex for the LIKE operator. In today's blog, we'll learn how full-text searching is implemented in MySQL. In part 2, we'll try our hand at some queries using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> as our database client.</p><h1 class="blog-sub-title">Full-text Searching Explained</h1><p>The purpose of FTS is to fetch documents that only loosely match search criteria against textual data. Hence, searching for "cars and trucks" would return results that contain the words separately, as in just "cars" or "trucks", or that contain the words in a different order ("trucks and cars"), or contain variants of the search terms, e.g. "car" and "truck". This allows businesses to guess at what the user is searching for and return more relevant results, faster.</p><p>Database Management Systems (DBMS) like MySQL do allow <i>quasi</i> text lookups using LIKE expressions. There are, however, some drawbacks to the LIKE operator:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>It tends to underperform on large datasets.</li><li>It's also limited to matching the user's input exactly, which means a query might yield no results even if there are in fact records with relevant information.</li></ul><h1 class="blog-sub-title">Full-Text Searching in MySQL</h1><p>In order to perform full-text searches in MySQL, you have to add a FULLTEXT index to fields that will support full-text searching. Moreover, full-text indexes can be used only with MyISAM and InnoDB tables. Finally, note that full-text indexes can be created only for CHAR, VARCHAR, or TEXT columns. 
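In DDL form, a FULLTEXT index can be declared when the table is created or added afterwards. A sketch (the articles table and the index names are illustrative, not part of the Sakila schema):

```sql
-- Declare the FULLTEXT index at table-creation time
CREATE TABLE articles (
    article_id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body TEXT,
    FULLTEXT idx_body (title, body)
) ENGINE=InnoDB;

-- Or add one to an existing table, here Sakila's film table
ALTER TABLE film ADD FULLTEXT INDEX idx_description (description);
-- equivalently:
CREATE FULLTEXT INDEX idx_description ON film (description);
```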
A FULLTEXT index definition can be given either in the CREATE TABLE statement, or added later using the ALTER TABLE or CREATE INDEX commands. A tip for large data sets: it's much faster to load your data into a table that does not have a FULLTEXT index and then create the index after the data has been loaded, rather than to create the FULLTEXT index first and then load the data.</p><p>There are three distinct types of full-text searches:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Natural Language Full-Text Searches</li><li>Boolean Full-Text Searches</li><li>Query Expansion Searches</li></ul><p>We'll cover each of these in turn as we go through the list in part 2.</p><h1 class="blog-sub-title">A Basic Example</h1><p>In the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>, the film table contains information about each movie in the store's film collection, including its title, running time, and description. We can take a look at a film table record in detail using the Form View. Available in the full version, the Form View allows us to view, update, insert, or delete data as a form, where the current record is displayed in full detail. There's also a navigation bar for switching between records quickly.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191011/film_table.jpg" style="max-width: 100%;"></td></tr><p>We can add full-text searching capability on the description column by adding the FULLTEXT index to it. We could issue either the ALTER TABLE or CREATE INDEX commands, but in Navicat, there's an easier way! The Table Designer contains a number of tabs pertaining to different table objects. These include column definitions, indexes, foreign keys, triggers, options, and more. 
We can add a full-text index on the description field by selecting it using the Field Selector dialog, and then choosing FULLTEXT from the Index Type drop-down:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191011/film_indexes.jpg" style="max-width: 100%;"></td></tr><p>Be sure to leave the Index method blank.</p><p>Click the Save button to create the new index:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191011/desc_index.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Now that we've prepped the database for full-text searching, we'll learn how to use Full-Text Search Functions in part 2.</p><p>Interested in Navicat for MySQL? You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mysql" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>Monitor your SQL Server Instances with Navicat Monitor</title>
<link>https://www.navicat.com/company/aboutus/blog/1260-monitor-your-sql-server-instances-with-navicat-monitor.html</link>
<description><![CDATA[<b>Oct 4, 2019</b> by Robert Gravelle<br/><br/><p>Navicat Monitor, the agentless database server instance monitoring tool for MySQL and MariaDB, recently added support for SQL Server. Hence, it can now monitor database processes and system resources for locally hosted SQL Server instances as well as those provided via Amazon Web Services (AWS). Today's blog will provide a quick guide to connecting to an SQL Server instance in order to monitor its performance using Navicat Monitor 2.0.</p><h1 class="blog-sub-title">Connecting to an SQL Server Instance</h1><p>To start monitoring an SQL Server instance, you have to configure the connection via the New Instance button. You'll see the new SQL Server item in the list. It will allow you to connect to both locally hosted SQL Server instances as well as those provided via Amazon Web Services (AWS):</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/new_instance.jpg" style="max-width: 100%;"></td></tr><p>The New Instance dialog now states the database type, e.g. "New SQL Server Instance", rather than the generic "New Instance" title of previous Navicat Monitor versions. While the fields and options follow a very similar layout across database types, closer inspection reveals that they are tailored to the specific type of database chosen, in this case SQL Server:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/new_instance_dialog.jpg" style="max-width: 100%;"></td></tr><p>Note that the Host Name defaults to "localhost". You can change it to your instance's server name, including that of an AWS instance, e.g. "sample-instance.abc2defghije.us-west-2.rds.amazonaws.com". 
In that way, both local and remote connections are fully supported.</p><p>For more information on connecting to your database instance(s), take a look at the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/967-configure-an-instance-in-navicat-monitor-for-mysql-mariadb" target="_blank">Configure an Instance in Navicat Monitor for MySQL/MariaDB</a> blog article.</p><h1 class="blog-sub-title">The Dashboard</h1><p>To accommodate SQL Servers, there is a new SQL Server Filter in the Filters bar. Filters allow you to only show database instances of the selected type(s):</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/filters.jpg" style="max-width: 100%;"></td></tr><p>All of the Dashboard functionality and features remain unchanged. You can still sort by Alert Severity, Name, or Instance Type, and instance cards may still be dragged from one Group to another using the handlebar.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/handlebar.jpg" style="max-width: 100%;"></td></tr><p>You'll know that you can drag the card when the mouse pointer turns into an arrow:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/arrow.jpg" style="max-width: 100%;"></td></tr><p>You can choose exactly which metrics are displayed in the Instance Card by clicking on CARD DESIGN. 
The metrics in the Card Design dialog are now divided into two columns: one for MySQL/MariaDB and one for SQL Server:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20191004/card_design.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how the latest version of Navicat Monitor has evolved to support SQL Server.</p><p>Navicat Monitor version 2.0 is now available for purchase at the <a class="default-links" href="https://www.navicat.com/en/store/navicat-monitor" target="_blank">Navicat Online Store</a> and is priced at US$499/token (commercial) and US$199/token (non-commercial). 1 token is needed to unlock 1 MySQL Server / 1 MariaDB Server / 1 SQL Server.</p><p>For more details on Navicat Monitor and all its features, please visit: <a class="default-links" href="https://www.navicat.com/en/discover-navicat-monitor" target="_blank">https://www.navicat.com/en/discover-navicat-monitor</a>.</p><p>You can download a fully functional 14-day free trial at <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">https://www.navicat.com/en/download/navicat-monitor</a>. Give it a try!</p>]]></description>
</item>
<item>
<title>Working with Cursors in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/1258-working-with-cursors-in-mongodb.html</link>
<description><![CDATA[<b>Sep 23, 2019</b> by Robert Gravelle<br/><br/><p>SQL queries often return more than one row of data from the database server. Relational databases provide cursors as a means for iterating over each row of the results set. Does that mean that MongoDB users are out of luck? As it turns out, MongoDB's db.collection.find() function returns a cursor. In MongoDB, cursors themselves provide additional functionality for processing individual documents. In today's blog, we'll learn how to work with MongoDB cursors in <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a>.</p><h1 class="blog-sub-title">A Simple Iteration Example</h1><p>Executing a query via the db.collection.find() function returns a cursor: a pointer to the collection of documents returned. The default behavior of a cursor is to allow automatic iteration across the results of the query. However, developers can explicitly go through the items returned in the cursor object. One way to do that is to use the forEach() cursor method.</p><p>In Navicat, it's easy to use the find() method. For instance, you can drag a pre-defined snippet into the Query Editor:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/find_method.jpg" style="max-width: 100%;"></td></tr><p>Alternatively, you can use the Find Builder. Just select the Collection or View to fetch all the documents:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/find_builder.jpg" style="max-width: 100%;"></td></tr><p>From there, you can chain the forEach() directly to the results cursor. In the following example, the three documents in our collection are printed to the console. 
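In mongo-shell code, the chained forEach() iteration looks roughly like this (a sketch; the collection name people is an assumption, since the actual collection name appears only in the screenshots):

```javascript
// Iterate explicitly over the cursor returned by find(),
// printing each document as formatted JSON
db.people.find().forEach(function (doc) {
    printjson(doc);
});
```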
You can view the output in the Print Output tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/for_each.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">A Loop of a Different Kind</h1><p>Like all JavaScript objects, a cursor may be stored in a variable for later use. Other JavaScript constructs like while loops are also fully supported thanks to the cursor.hasNext() and cursor.next() methods. As we see in this example, hasNext() informs the loop condition whether or not there is another document to iterate over; next() returns said document.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/while.jpg" style="max-width: 100%;"></td></tr><p>The printjson() helper method is a convenience method that replaces print(tojson()). It outputs documents exactly as they are stored in the database, except that they are represented as JSON rather than BSON, the binary equivalent.</p><h1 class="blog-sub-title">Updating Data</h1><p>Recall that db.collection.find() returns a pointer to the collection of documents returned. As such, document fields are fully editable. Hence, we can invoke the forEach() method to update documents that match the specified criteria. Here's a function that updates documents where the name equals "Tom Smith":</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/update.jpg" style="max-width: 100%;"></td></tr><p>In our case, there is only one matching document, but, in theory, there could be many.</p><h1 class="blog-sub-title">Data Transformation</h1><p>One type of data transformation is to simplify a dataset's structure in order to make it easier to consume. To do that, we can use the map() function. This example applies the split() function to the name field to break it down into its first and last constituents. 
Then, the reverse() and join() methods convert "Tom Smithers" to "Smithers, Tom":</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190923/map.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we saw how cursors returned by the db.collection.find() method come packed with all sorts of methods to iterate over documents for the purposes of printing, modifying, deleting, or transforming their contents. Interested in taking Navicat for MongoDB for a spin? You can download a free trial <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">here</a>!</p>]]></description>
</item>
<item>
<title>Introducing Navicat Data Modeler 3.0!</title>
<link>https://www.navicat.com/company/aboutus/blog/1251-introducing-navicat-data-modeler-3-0.html</link>
<description><![CDATA[<b>Sep 10, 2019</b> by Robert Gravelle<br/><br/><p>Being the dedicated database developer and/or administrator that you are, I don't need to remind you that the rigorous application of the principles of sound database design via data modeling is one of the cornerstones of data management. To that end, the emergence of specialized software such as Navicat Data Modeler has made the process much easier to accomplish.</p><p>It's been around for some time now, so I have written about it a few times, first in a <a class="default-links" href="https://www.databasejournal.com/features/mysql/simplifying-mysql-database-design-using-a-graphical-data-modeling-tool.html" target="_blank">Database Journal article</a>, and then on the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/669-create-a-model-from-a-database-in-navicat" target="_blank">Navicat Blog</a>. Now that <a class="default-links" href="https://www.navicat.com/en/download/navicat-data-modeler-3-beta" target="_blank">version 3.0</a> is in beta, let's explore what it brings to the table, in particular, these 3 exciting new features:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Database Synchronization (just like the one found in Navicat Database clients!)</li><li>Dark mode UI option</li><li>Native Linux support</li></ul><h1 class="blog-sub-title">Database Synchronization</h1><p>One of Navicat Data Modeler's greatest strengths is its ability to generate DDL (Data Definition Language) statements to create database objects from a Diagram. Version 3 goes one step further by providing Structure Synchronization, which brings the schemas of two databases into sync. 
This is a very useful feature for bringing one or more databases up to date with one to which you've applied design changes.</p><p>A wizard guides you through the process:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>The first screen is where we choose the source model and target database:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/synch_dbs.jpg" style="max-width: 100%;"></td></tr></li><br/><li>The second screen presents a graphical comparison of database objects as well as the DDL statements and deployment scripts:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/updates_to_synch.jpg" style="max-width: 100%;"></td></tr></li><br/><li>The next screen shows all of the DDL statements that will be applied to the target database in order to synchronize its structure with the model:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/deploy_script.jpg" style="max-width: 100%;"></td></tr><br/>At this point you can also set deployment options and even edit the script to suit your exact requirements. From there, you can recompare the target database to your changes, taking into account your manual script edits.</li><br/><li>A detailed message log and process statistics are provided for the script's execution:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/message_log.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Dark Mode UI Option</h1><p>A dark theme displays dark surfaces across the majority of a UI, as opposed to the mostly white surfaces of Windows themes. 
Dark themes have become increasingly popular in recent years, for several good reasons!</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Dark themes reduce the luminance emitted by device screens, while still meeting minimum color contrast ratios.</li><li>They help improve visual ergonomics by reducing eye strain, adjusting brightness to current lighting conditions, and facilitating screen use in dark environments, all while conserving battery power.</li><li>Devices with OLED screens benefit from the ability to turn off black pixels at any time of day.</li></ul><p>You can choose between the traditional Windows (light) theme and the new Dark Theme on the General options screen:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/theme_options.jpg" style="max-width: 100%;"></td></tr><p>After you restart the application, you'll see your new UI theme:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190910/dark_theme.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Native Linux support</h1><p>Previous versions of Navicat Data Modeler required Wine, a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, to run on Linux. Navicat Data Modeler 3.0 is a true Linux application, so users can now enjoy a UI that better matches the Linux desktop!</p><h1 class="blog-sub-title">Conclusion</h1><p>Navicat Data Modeler 3.0 adds several exciting features to an already stellar modeling tool. Interested in trying out Navicat Data Modeler 3.0? You can download it <a class="default-links" href="https://www.navicat.com/en/download/navicat-data-modeler-3-beta" target="_blank">here</a>!</p>]]></description>
</item>
<item>
<title>Using the SQL CASE Statement</title>
<link>https://www.navicat.com/company/aboutus/blog/1249-using-the-sql-case-statement.html</link>
<description><![CDATA[<b>Sep 5, 2019</b> by Robert Gravelle<br/><br/><p>CASE is a Control Flow statement that acts a lot like an IF-THEN-ELSE statement to choose a value based on the data. The CASE statement goes through conditions and returns a value when the first condition is met. So, once a condition is true, it will short-circuit, thereby ignoring later clauses, and return the result. As we'll see in today's blog, it can be used to test for conditions as well as discrete values.</p><h1 class="blog-sub-title">Basic Syntax</h1><p>The CASE statement comes in two flavors: the first evaluates one or more conditions and returns the result for the first condition that is true. If no condition is true, the result after ELSE is returned, or NULL if there is no ELSE part:</p><p><font face="monospace">CASE<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN condition1 THEN result1<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN condition2 THEN result2<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN conditionN THEN resultN<br/>&nbsp;&nbsp;&nbsp;&nbsp;ELSE result<br/>END;</font></p><p>The second CASE syntax returns the result for the first value=compare_value comparison that is true. If no comparison is true, the result after ELSE is returned, or NULL if there is no ELSE part:</p><p><font face="monospace">CASE value<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN compare_value1 THEN result1<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN compare_value2 THEN result2<br/>&nbsp;&nbsp;&nbsp;&nbsp;WHEN compare_valueN THEN resultN<br/>&nbsp;&nbsp;&nbsp;&nbsp;ELSE result<br/>END;</font></p><h1 class="blog-sub-title">Some Examples</h1><p>To try out the CASE statement, we'll be writing some queries against the <a class="default-link" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a> using <a class="default-link" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. 
It's a powerful database development and administration tool that can simultaneously connect to most popular databases, including MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite. It's also compatible with many cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas.</p><p style="font-size: 18px;">Syntax 1</p><p>Here's a query that selects a list of movie titles, along with their release year and rental price:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190905/basic_query.jpg" style="max-width: 100%;"></td></tr><p>We'll add a column that splits rental prices into three categories: "discount", "regular", and "premium". The price ranges are:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>less than $2.99</li><li>at least $2.99 but less than $4.99</li><li>$4.99 and up</li></ul><p>To help with the CASE statement, Navicat provides Code Snippets that you can simply drag and drop into the SQL editor. Although you can create your own, Navicat comes with many standard SQL statements, including DDL and flow control statements. In fact, you'll find the CASE statement at the top of the Flow Control list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190905/code_snippets.jpg" style="max-width: 100%;"></td></tr><p>After you place the code snippet into the editor, editable sections are color coded to help identify them. You can use the Tab key to move from one to the next.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190905/inserted_case_statement.jpg" style="max-width: 100%;"></td></tr><p>Since the statements are generic in nature, you may have to modify them slightly to suit your particular database type. 
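Written out, a price-category query along these lines might look as follows (a sketch against the Sakila film table; the exact boundary handling is an assumption based on the ranges listed above):

```sql
SELECT title,
       release_year,
       rental_rate,
       CASE
           WHEN rental_rate < 2.99 THEN 'discount'  -- less than $2.99
           WHEN rental_rate < 4.99 THEN 'regular'   -- at least $2.99, under $4.99
           ELSE 'premium'                           -- $4.99 and up
       END AS price_category
FROM film;
```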
Here is the complete CASE statement and query for MySQL:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190905/case_prices.jpg" style="max-width: 100%;"></td></tr><p style="font-size: 18px;">Syntax 2</p><p>The second CASE syntax is ideal for testing discrete values against two or more conditions. For example, we could use it to add a target audience column based on the film rating:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190905/target_audience.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned how the SQL CASE statement can be employed to choose a value based on the underlying data. Example SQL statements were written in <a class="default-link" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. It helps you code quickly with Code Completion and customizable Code Snippets, offering suggestions for keywords and stripping the repetition from coding. You can <a class="default-link" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>Navicat Data Modeler 3 Beta was released today!</title>
<link>https://www.navicat.com/company/aboutus/blog/1250-navicat-data-modeler-3-beta-was-released-today.html</link>
<description><![CDATA[<b>Sep 3, 2019</b><br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190903/NDM3 beta.png" style="max-width: 100%;"></td></tr><p>Navicat Data Modeler 3.0 Highlights:</p><ul style="list-style-type: circle; margin-left: 24px; line-height: 24px"><li>a redesigned Synchronize to Database feature that delivers a full picture of database differences and generates scripts to update the destination database;</li><li>a dark theme you can set as your default viewing preference to protect your eyes from the traditionally blinding whiteness of computer screens;</li><li>native Linux support, providing a UI that better matches the Linux user experience;</li><li>and more.</li></ul><p>Try it now - <a class="default-links" href="https://www.navicat.com/en/download/navicat-data-modeler-3-beta" target="_blank">https://www.navicat.com/en/download/navicat-data-modeler-3-beta</a></p>]]></description>
</item>
<item>
<title>Validating Data using Triggers in MySQL 8</title>
<link>https://www.navicat.com/company/aboutus/blog/1236-validating-data-using-triggers-in-mysql-8.html</link>
<description><![CDATA[<b>Aug 21, 2019</b> by Robert Gravelle<br/><br/><p>There are some very good reasons why data validation is best performed at the database level rather than at the application level. For instance, the same data source may be accessed by multiple applications. Therefore, you can rely on the data being consistent and valid without having to depend on validation logic being applied on the application side, which might not be consistent across different implementations. Moreover, triggers are ideal for validation because they can be executed before data is inserted or updated. Triggers can also prevent a database transaction from being applied while providing an error message.</p><p>In today's blog, we'll write a trigger in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> that will validate insert operations on a MySQL database table.</p><h1 class="blog-sub-title">Designing the Trigger</h1><p>We'll be working with the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila sample database</a>. It contains a number of related tables themed around a fictional video rental store. Here they are in the Navicat Premium navigation pane:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/sakila_tables.jpg" style="max-width: 100%;"></td></tr><p>We'll be adding our trigger to the film table. If you open it in the Designer, you'll see that there are several tabs there:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/designer_tabs.jpg" style="max-width: 100%;"></td></tr><p>Clicking the Triggers tab reveals that there are already a few triggers defined for that table. For instance, the ins_film trigger copies film information to the film_text table on data inserts. 
This is a common task allocated to triggers.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/existing-triggers.jpg" style="max-width: 100%;"></td></tr><p>Now we'll add a trigger that will make sure that foreign films are inserted with an original_language_id.</p><p>A film's language is actually stored in the language lookup table:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/film_language_relationship.jpg" style="max-width: 100%;"></td></tr><br/><br/><font face="monospace"><table border="0"><tr><td>language_id</td><td>&nbsp;&nbsp;</td><td>name</td><td>&nbsp;&nbsp;</td><td>last_update</td></tr><tr><td>1</td><td>&nbsp;&nbsp;</td><td>English</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr><tr><td>2</td><td>&nbsp;&nbsp;</td><td>Italian</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr><tr><td>3</td><td>&nbsp;&nbsp;</td><td>Japanese</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr><tr><td>4</td><td>&nbsp;&nbsp;</td><td>Mandarin</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr><tr><td>5</td><td>&nbsp;&nbsp;</td><td>French</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr><tr><td>6</td><td>&nbsp;&nbsp;</td><td>German</td><td>&nbsp;&nbsp;</td><td>2006-02-15 05:02:19</td></tr></table></font><p>Any language_id other than 1 should have an original_language_id as well. Our trigger will check for a value in the original_language_id column.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>In the Design View of the film table, select the Triggers tab and click on the Add Trigger button. 
<br/>That will add a new row in the triggers table.</li><li>Assign a name of "ins_validate_language", select BEFORE from the Fires drop-down, and click on the Insert checkbox.</li><li>Here's the trigger Body:<br/><font face="monospace">BEGIN<br/>&nbsp;&nbsp;IF NEW.language_id != 1 AND NEW.original_language_id IS NULL<br/>&nbsp;&nbsp;THEN<br/>&nbsp;&nbsp;&nbsp;&nbsp;SIGNAL SQLSTATE '45000'<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;SET MESSAGE_TEXT = 'Original language id is required for Foreign language films.';<br/>&nbsp;&nbsp;END IF;<br/>END<br/></font><br/>Here is our Trigger with all of the fields filled in:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/ins_validate_language.jpg" style="max-width: 100%;"></td></tr></li><li>Click the Save button to create the trigger.</li></ul><h1 class="blog-sub-title">Testing the Trigger</h1><p>Now it's time to verify that our trigger works as expected. To test it, let's add a new row to the film table with a foreign language_id.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Open the film table in the Editor.</li><li>Navigate to the last row.</li><li>Select the Form View and click on the Plus (+) button to add a new row:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/record_form.jpg" style="max-width: 100%;"></td></tr></li><li>In the form, you only need to enter a title and language_id; all other fields have default values or are not required.</li><li>When you click the Add (checkmark) button, you should see our error message:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190821/alert_message.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Conclusion</h1><p>Triggers are ideal for validation because they can be executed before data is inserted or updated. 
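The same validation pattern can be reproduced in a self-contained sketch using Python's built-in sqlite3 module. Note the assumptions: SQLite uses RAISE(ABORT, ...) where MySQL uses SIGNAL SQLSTATE '45000', and the film table below is a pared-down stand-in for Sakila's, not the real schema.

```python
import sqlite3

# Miniature analog of the article's MySQL trigger, in SQLite syntax.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE film (
    film_id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    language_id INTEGER NOT NULL,
    original_language_id INTEGER
);
CREATE TRIGGER ins_validate_language
BEFORE INSERT ON film
WHEN NEW.language_id != 1 AND NEW.original_language_id IS NULL
BEGIN
    SELECT RAISE(ABORT,
        'Original language id is required for Foreign language films.');
END;
""")

# An English-language film (language_id = 1) is accepted.
conn.execute("INSERT INTO film (title, language_id) VALUES ('ACADEMY DINOSAUR', 1)")

# A foreign film without an original_language_id is rejected by the trigger.
try:
    conn.execute("INSERT INTO film (title, language_id) VALUES ('LA DOLCE VITA', 2)")
except sqlite3.IntegrityError as e:
    print(e)  # prints the trigger's MESSAGE_TEXT equivalent
```

The BEFORE INSERT timing is the same as in the MySQL version: the offending row never reaches the table.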
We saw how triggers can be employed for validation purposes by writing one in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p>]]></description>
</item>
<item>
<title>Using the SQL Limit Keyword</title>
<link>https://www.navicat.com/company/aboutus/blog/1215-using-the-sql-limit-keyword.html</link>
<description><![CDATA[<b>Jul 30, 2019</b> by Robert Gravelle<br/><br/><p>The SQL LIMIT clause constrains the number of rows returned by a SELECT statement. For Microsoft databases like SQL Server or MS Access, you can use the SELECT TOP statement to limit your results, which is Microsoft's proprietary equivalent to the SELECT LIMIT statement. However, for many other relational databases (DBMSes), including MySQL/MariaDB and PostgreSQL, the SQL LIMIT clause can solve several problems (Oracle achieves the same result with ROWNUM or the FETCH FIRST clause). In today's blog, we'll explore a few of these, using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL</a>.</p><h1 class="blog-sub-title">Keeping Result Sets Manageable</h1><p>In many production and test databases, tables routinely contain millions of rows and dozens of columns. For that reason, it's never a good idea to run unrestricted <font face="monospace">SELECT *</font> queries against your database(s). Limiting results to one hundred or one thousand rows keeps result sets at a size that's more easily digestible.</p><p>Navicat development and administration tools automatically limit result sets by default in order to prevent straining your database server(s). You can see it in action when you open a table. At the bottom of the application window, the SQL that Navicat executed to fetch the table rows is displayed. 
It ends with "LIMIT 1000 OFFSET 0", which means that only the first 1000 records are displayed.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/table_limit.jpg" style="max-width: 100%;"></td></tr><p>You can change the default number of records to show or turn off limiting entirely on the RECORDS Options screen:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/limit_records.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Top N Queries</h1><p>As the name implies, top-N queries are those that attempt to find the top number of records from a result set. This could be top 1, top 3, top 5, top 10, or top [any] number. Some common examples are:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Find the top 10 highest paid employees</li><li>Find the top 20 most profitable customers</li><li>Find the top 3 users on the system</li></ul><p>These queries are hard to do with an ORDER BY and WHERE clause alone, but easy using the LIMIT clause. Here's an example:</p><p style="font-size: 18px;"><b>Top 5 Unique Job IDs</b></p><p>Let's say that we wanted to find the top 5 unique Job IDs in a table. Here's a query that does just that:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/top_job_ids.jpg" style="max-width: 100%;"></td></tr><p>The DISTINCT keyword makes sure that duplicate IDs are removed from the results.</p><h1 class="blog-sub-title">Closest Rows to a Given Date</h1><p>It is possible to locate rows closest to a given date using LIMIT. You just have to compare row dates to the given date, order the results, and limit the results to the number of rows that you'd like to see. 
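That pattern can be sketched with Python's built-in sqlite3 module; the payment table and dates below are invented for illustration:

```python
import sqlite3

# Minimal "closest row to a given date" demo: filter, order, then LIMIT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payment (id INTEGER PRIMARY KEY, creation_date TEXT)")
conn.executemany("INSERT INTO payment (creation_date) VALUES (?)",
                 [("2017-12-28",), ("2018-01-02",), ("2018-03-15",), ("2018-07-01",)])

# Rows after the given date, nearest first, capped by LIMIT.
rows = conn.execute("""
    SELECT id, creation_date
      FROM payment
     WHERE creation_date > '2018-01-01'
     ORDER BY creation_date
     LIMIT 1
""").fetchall()
print(rows)  # [(2, '2018-01-02')] -- the closest later date
```

Dropping the LIMIT would return every later row; the LIMIT is what turns the filter into a "closest match" lookup.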
Here's a query that returns rows whose creation_date is greater than '2018-01-01':</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/closest_dates.jpg" style="max-width: 100%;"></td></tr><p>In this case, 2018-01-02 was the closest later date.</p><h1 class="blog-sub-title">Bottom N Queries</h1><p>The corollary of top N queries is bottom N queries. These are queries that attempt to find the bottom number of records from a result set. We can convert our Top queries into their Bottom equivalents quite easily!</p><p style="font-size: 18px;"><b>Bottom 5 Unique Job IDs</b></p><p>To return the bottom 5 unique job IDs, all you need to do is remove the DESC modifier in the ORDER BY clause. That will order records in ascending (ASC) order, as is the default:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/bottom_job_ids.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Closest Rows below a Given Date</h1><p>Locating the closest rows before a given date is likewise fairly easy. We just need to change the greater than '&gt;' operator to less than '&lt;' and reorder results in descending (DESC) order:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190730/closest_dates_before.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we explored a few uses for the LIMIT clause, using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-postgresql" target="_blank">Navicat for PostgreSQL</a>. Like to give Navicat for PostgreSQL a try? You can evaluate it for 14 days completely free of charge!</p>]]></description>
</item>
<item>
<title>The SQL Self JOIN</title>
<link>https://www.navicat.com/company/aboutus/blog/1214-the-sql-self-join.html</link>
<description><![CDATA[<b>Jul 24, 2019</b> by Robert Gravelle<br/><br/><p>There are times when you need to fetch related data that reside in the same table. For that, a special kind of join, called a self join, is required. In today's blog, we'll learn how to write a query that includes a self join using Navicat Premium as the database client.</p><h1 class="blog-sub-title">Syntax</h1><p>The basic syntax of SELF JOIN is as follows:</p><p><font face="monospace">SELECT a.column_name, b.column_name...<br/>FROM table1 a, table1 b<br/>WHERE a.common_field = b.common_field;<br/></font></p><p>In addition to linking on common fields, the WHERE clause could contain other expressions based on your specific requirements.</p><h1 class="blog-sub-title">An Example</h1><p>In the Sakila Sample Database, there is a customer table that contains customer-related information such as their name, email, and address. Here are the columns in the Navicat Table Designer:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190724/customer_table_design.jpg" style="max-width: 100%;"></td></tr><p>We can use a self join to retrieve all customers whose last name matches the first name of another customer. We achieve this by assigning aliases to the customer table. The aliases allow us to join the table to itself because they give the table two unique names, which means that we can query the table as though it were two different tables. 
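Before looking at the full Sakila query, the aliasing idea can be verified in miniature with Python's built-in sqlite3 module; the customer rows below are invented, not the actual Sakila data:

```python
import sqlite3

# Tiny self-join demo: one table, two aliases (c1 and c2).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)""")
conn.executemany(
    "INSERT INTO customer (first_name, last_name) VALUES (?, ?)",
    [("MARY", "SMITH"), ("SMITH", "JONES"), ("PATRICIA", "JOHNSON")])

# Match one customer's last name against another customer's first name.
rows = conn.execute("""
    SELECT c1.last_name, c2.first_name
      FROM customer c1, customer c2
     WHERE c1.last_name = c2.first_name
""").fetchall()
print(rows)  # [('SMITH', 'SMITH')] -- MARY SMITH matches SMITH JONES
```

Without the aliases, the database would have no way to distinguish the two "copies" of the table.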
These are then joined on the last_name and first_name fields:</p><p><font face="monospace">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c1.customer_id as customer_1_id,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c1.first_name&nbsp;&nbsp;as customer_1_first_name,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c1.last_name&nbsp;&nbsp;&nbsp;as customer_1_last_name,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c2.customer_id as customer_2_id,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c2.first_name&nbsp;&nbsp;as customer_2_first_name,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c2.last_name<br/>FROM customer c1,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;customer c2<br/>WHERE c1.last_name = c2.first_name<br/>ORDER BY c1.last_name;<br/></font></p><p>Navicat's auto-complete feature is really useful when writing your queries because it helps avoid typos and having to guess at column names. For this reason, it's especially useful for selecting fields:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190724/auto-complete.jpg" style="max-width: 100%;"></td></tr><p>Executing the query generates the following results:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190724/results.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Using an INNER JOIN</h1><p>Another way to link a table to itself is to use an INNER JOIN. If you're not sure how to do that, Navicat can help! It provides a useful tool called Query Builder for building queries visually. It allows you to create and edit queries without much knowledge of SQL. The database objects are displayed in the left pane. 
The right pane is divided into two portions: the upper Diagram Design pane and the lower Syntax pane.</p><p>We can simply drag the last_name field of the first table alias to the first_name of the second table alias and the Query Builder will generate the JOIN for us!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190724/query_builder.jpg" style="max-width: 100%;"></td></tr><p>Here's the generated SQL statement:</p><p><font face="monospace">SELECT<br/>c1.customer_id AS customer_1_id,<br/>c1.first_name AS customer_1_first_name,<br/>c1.last_name AS customer_1_last_name,<br/>c2.customer_id AS customer_2_id,<br/>c2.first_name AS customer_2_first_name,<br/>c2.last_name<br/>FROM<br/>customer AS c1<br/>INNER JOIN customer AS c2 ON c1.last_name = c2.first_name<br/>ORDER BY<br/>customer_1_last_name ASC<br/>;<br/></font></p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to write a query that includes a self join using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. Give Navicat Premium a try. You can evaluate it for 14 days completely free of charge!</p>]]></description>
</item>
<item>
<title>Managing Multiple Databases from a Single Application</title>
<link>https://www.navicat.com/company/aboutus/blog/1115-managing-multiple-databases-from-a-single-application.html</link>
<description><![CDATA[<b>Jul 16, 2019</b> by Robert Gravelle<br/><br/><p>Even if your company is still relatively small, it may already be in the process of outgrowing the database that you started with. As this happens, new applications will interface with a larger and more powerful database. Meanwhile, the original database will still play a (reduced) role in business activities. Eventually, you will need to manage a variety of databases, each with its own features, specialized syntax, and connection protocols.</p><p>Managing multiple databases either necessitates that you employ multiple client applications or find one that can accommodate all of the databases that you use. One such tool is <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. Not only does it support most of the major Database Management Systems (DBMSes), but it is one of the few tools that can connect to all of them simultaneously!</p><p>In today's blog, we will examine some of the challenges of managing multiple databases and provide some practical examples of how to overcome them using Navicat Premium.</p><h1 class="blog-sub-title">Connecting to Multiple Databases</h1><p>Establishing connections to multiple databases is not a trivial task because each database product implements its own connection parameters. For instance, some databases require a default database, whereas others do not. Navicat smooths out these differences by providing a consistent Connection dialog for each database type, with only a few minor variations between screens. 
Here's a comparison of the New Connection dialog for MySQL on Windows and SQL Server on macOS:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190716/mysql-connect.gif" style="max-width: 100%;"></td><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190716/sql_server-connect.png" style="max-width: 100%;"></td></tr><p>For more information on connecting to multiple databases, please see this <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1061-connecting-to-multiple-databases-from-a-single-tool.html" target="_blank">recent blog</a>.</p><h1 class="blog-sub-title">Querying across Multiple Databases</h1><p>When it comes to SQL queries, most DBMSes support a standardized set of SQL statements and functions. Beyond that, many database vendors try to set their product(s) apart by including an additional set of extended SQL features. For example, a pivot table is a table of statistics that summarizes the data of a more extensive table (such as from a database, spreadsheet, or business intelligence program). 
This summary might include sums, averages, or other statistics, which the pivot table groups together in a meaningful way.</p><p>Database support for pivot tables varies greatly across DBMSes, as described below:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>PostgreSQL, an object-relational database management system, allows the creation of pivot tables using the tablefunc module.</li><li>MariaDB, a MySQL fork, allows pivot tables using the CONNECT storage engine.</li><li>Microsoft Access supports pivot queries under the name "crosstab" query.</li><li>Oracle database and SQL Server support the PIVOT operation.</li><li>Some popular databases that do not directly support pivot functionality, such as SQLite, can usually simulate it using embedded functions, dynamic SQL, or subqueries.</li></ul><p>In Navicat, you can query multiple databases with one statement, as long as you can join the various tables on a common field and the syntax is supported by all of the databases included in the query:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190604/union_query_results.jpg" style="max-width: 100%"></td></tr><p><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1059-how-to-query-across-multiple-databases.html" target="_blank">Here's</a> a blog all about querying multiple databases.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we examined some of the challenges of managing multiple databases and reviewed some practical examples of how to overcome them using Navicat Premium.</p><p><a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> is available for the Windows, macOS, and Linux operating systems and supports MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite databases. 
It's also compatible with cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas. <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Try it</a> today!</p>]]></description>
</item>
<item>
<title>More Essential SELECT Queries for Every Database Developer's Toolkit</title>
<link>https://www.navicat.com/company/aboutus/blog/1064-more-essential-select-queries-for-every-database-developer-s-toolkit.html</link>
<description><![CDATA[<b>Jun 19, 2019</b> by Robert Gravelle<br/><br/><p>A short time ago, we explored <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1057-some-select-queries-you-must-know.html" target="_blank">Some SELECT Queries You Must Know</a>. These included determining the lowest and highest value for a column, as well as grouping results by category. Today's blog presents a couple more queries, along with a tip to make your queries almost write themselves!</p><h1 class="blog-sub-title">Get All User Created Tables</h1><p>These are tables that belong to user-created databases, that is to say, tables that are not part of system database schemas. The exact syntax varies by vendor, but here are a couple of examples to give you the idea.</p><p>In SQL Server, this simple one-liner will do the job:</p><p><font face="monospace">SELECT NAME FROM sys.objects WHERE TYPE='U'</font></p><p>MySQL's syntax is a bit more wordy because you have to specify the system databases in order to omit their tables:</p><p><font face="monospace">SELECT * from information_schema.tables<br/>WHERE table_schema not in ('information_schema', 'mysql', 'performance_schema')<br/>ORDER BY table_schema, table_name;</font></p><p>So why would you want to query user tables? In addition to table names, the MySQL query returns a great deal of useful information about each table, including its number of rows, storage engine, size, last auto_increment value, and more!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190619/user_tables.jpg" style="max-width: 100%;"></td></tr><p>If you only want the table names in MySQL, that's easily done. 
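For comparison, SQLite keeps the same kind of catalog metadata in its sqlite_master table rather than in information_schema; here's a quick sketch using Python's built-in sqlite3 module (the table and view names are invented):

```python
import sqlite3

# SQLite analog of "get all user created tables": query the
# sqlite_master catalog, filtering on the object type.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actor (actor_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE film (film_id INTEGER PRIMARY KEY, title TEXT);
CREATE VIEW film_list AS SELECT title FROM film;
""")

tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print(tables)  # [('actor',), ('film',)] -- the view is excluded
```

Swapping `type = 'table'` for `type = 'view'` lists the views instead, mirroring the TABLE_TYPE filter shown below for MySQL.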
You can narrow down the list using the WHERE clause, or you can issue the following command:</p><p><font face="monospace">SHOW FULL TABLES IN [database_name] WHERE TABLE_TYPE LIKE 'BASE TABLE';</font></p><h1 class="blog-sub-title">Get All View Names</h1><p>Again, the exact syntax varies by vendor, but a couple of examples will provide a good starting point.</p><p>Here's the SQL Server syntax:</p><p><font face="monospace">SELECT * FROM sys.views</font></p><p>In MySQL we can narrow down the list to views by limiting the TABLE_TYPE to 'VIEW'. We still have to exclude the sys database as it contains a number of views:</p><p><font face="monospace">SELECT * FROM information_schema.`TABLES`<br/>WHERE TABLE_TYPE = 'VIEW'<br/>AND table_schema != 'sys';</font></p><p>Here are the results in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190619/views.jpg" style="max-width: 100%;"></td></tr><p>Looking for views of a specific database? You can just change the WHERE clause to:</p><p><font face="monospace">AND TABLE_SCHEMA LIKE '[database_name]'</font></p><p>The following command will also work:</p><p><font face="monospace">SHOW FULL TABLES IN [database_name] WHERE TABLE_TYPE LIKE 'VIEW';</font></p><p>That will return the view names and their type, which is always "view":</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190619/views_in_db.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">General Tip: Using Table Aliases</h1><p>Writing SQL queries is an art as much as a science. There are some good habits that you can develop that will pay dividends in productivity and/or ease of writing. For example, table (or SQL) aliases are used to give a table, or a column in a table, a temporary name that only exists for the duration of the query. 
Aliases may be employed to make column names more readable and less error prone.</p><p>All you need to do is include the "AS [alias_name]" after the table name in the FROM clause:</p><p><font face="monospace">SELECT column_name(s)<br/>FROM table_name AS alias_name;</font></p><p>Aliases really earn their keep when you use a Query Editor, like Navicat's. Suppose that we want to select some fields from the actor table. First, we would leave the column list empty and we would enter the FROM clause, complete with a table alias:</p><p><font face="monospace">SELECT<br/><br/>FROM actor as a</font></p><p>Now, when we enter our shorter table alias, Navicat presents an auto-complete list with all the table columns:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190619/auto_complete.jpg" style="max-width: 100%;"></td></tr><p>Writing queries in this way is not only faster, but it eliminates the chance of misspelling a column!</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned a couple of queries and a tip to make our SELECTs almost write themselves using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as the database client. Navicat helps you code fast with Code Completion and customizable Code Snippets by getting suggestions for keywords and stripping the repetition from coding. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>Connecting to Multiple Databases from a Single Tool</title>
<link>https://www.navicat.com/company/aboutus/blog/1061-connecting-to-multiple-databases-from-a-single-tool.html</link>
<description><![CDATA[<b>Jun 12, 2019</b> by Robert Gravelle<br/><br/><p>Many database management and development tools support multiple connections to homogeneous databases, i.e., where they are all of the same type, ALL MySQL, ALL SQL Server, ALL Oracle, etc. On the other hand, very few support heterogeneous database servers, i.e. MySQL AND SQL Server AND Oracle, etc. Don't believe me? Just google it!</p><p>One of the few tools which does support heterogeneous database products is <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>. Moreover, it can connect simultaneously to MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. It is also compatible with most cloud databases, including Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud, Alibaba Cloud, Tencent Cloud, MongoDB Atlas and Huawei Cloud.</p><p>In today's tip, we'll learn how to set up multiple connections in Navicat Premium - one to a local MySQL instance and another to Microsoft Azure.</p><h1 class="blog-sub-title">Connecting to a Local Database Instance</h1><p>Navicat Premium supports a wide array of connection types for both local instances and cloud services. Moreover, connections may be established over secure protocols such as SSL and SSH. The latter, SSH (Secure Shell) tunneling, is a good option in situations where your Internet Service Provider (ISP) does not provide direct access to its server; HTTP tunneling is another good choice.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/connection_list.jpg" style="max-width: 100%;"></td></tr><p>Here are the steps for connecting to a local MySQL instance:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Click Connection and select your server type. 
Then, enter the necessary information in the Connection dialog:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/new_connection.jpg" style="max-width: 100%;"></td></tr><br/><br/>Note that Navicat allows you to connect to remote servers running on different platforms (i.e. Windows, macOS, Linux and UNIX), and supports PAM and GSSAPI authentication.<br/><br/>Later, you can edit the connection properties by right-clicking the connection and selecting Edit Connection.</li><br/><li>You can test your connection properties by clicking the <i>Test Connection</i> button.</li><br/><li>If you'd like to customize your database list, you may do so on the Databases tab:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/databases_tab.jpg" style="max-width: 100%;"></td></tr></li><br/><li>After closing the New Connection dialog, your new connection will appear in the Navigation Pane, on the left:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/mysql_connection.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Connecting to a Cloud Service</h1><p>The procedure for connecting to a cloud service is similar. Let's connect to a Microsoft Azure SQL Database instance.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 24px;"><li>Select Connection -&gt; Microsoft Azure -&gt; Microsoft Azure SQL Database... from the main menu.</li><li>Once again, enter the necessary information in the General tab of the Connection dialog:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/azure_connection.jpg" style="max-width: 100%;"></td></tr><br/><br/>You'll find the Host value under the Server name header on the Overview page of your Azure instance. 
There's even a button to copy the name to the clipboard!<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/azure_details.jpg" style="max-width: 100%;"></td></tr></li><br/><li>After closing the New Connection dialog, your new connection will appear in the Navigation Pane. You can open your connections by double-clicking them (I color-coded these to highlight them):<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190612/open_connections.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's tip we learned how to set up multiple connections in Navicat Premium. Being able to connect to multiple database instances simultaneously offers many benefits, from being able to query multiple instances using the same SELECT statement, to easier migration. In fact, we explored <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1059-how-to-query-across-multiple-databases.html" target="_blank">How to Query across Multiple Databases</a> in the previous tip. Interested in trying out <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">Navicat Premium</a>? You can evaluate it for 14 days completely free of charge!</p>]]></description>
</item>
<item>
<title>How to Query across Multiple Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1059-how-to-query-across-multiple-databases.html</link>
<description><![CDATA[<b>Jun 4, 2019</b> by Robert Gravelle<br/><br/><p>With master-slave topologies and modern practices such as database sharding becoming increasingly ubiquitous, database administrators (DBAs) and developers are working with multiple databases more than ever before. Doing so is made a lot easier by software that can accommodate multiple database connections.</p><p>That's where <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> comes in. It's a database development, administration and management tool that allows you to simultaneously connect to MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite databases. Navicat is also compatible with most cloud databases, including Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud, Alibaba Cloud, Tencent Cloud, MongoDB Atlas and Huawei Cloud.</p><p>In today's blog, we'll learn how to construct and execute a SELECT query that will fetch data from multiple databases using Navicat Premium's SQL Editor.</p><h1 class="blog-sub-title">Setting up the Environment</h1><p>We'll need a couple of tables, each within its own database. As it happens, I've got a few copies of the Sakila Sample Database. I've created copies of the actor table and split its contents down the middle, so that names starting with A to L are in the first database and names starting with M to Z are in the other. That will allow us to combine the two groups of names into one result set. Here is their layout in the Navicat object pane:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190604/actor_tables.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Multiple Database SELECT Syntax</h1><p>Just as you can refer to a table within the default database as tbl_name in your SELECT statements, you can also prefix the table name with the database name, e.g. 
db_name.tbl_name, to specify a database explicitly. The database prefix can also be employed to combine different databases within one SELECT statement's table list, as specified after the FROM keyword. Hence, the following is valid SQL:</p><p><font face="monospace">SELECT database1.table1.field1,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;database2.table1.field1<br/>FROM database1.table1,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;database2.table1<br/>WHERE database1.table1.age &gt; 12;</font></p><h1 class="blog-sub-title">Using Table JOINs</h1><p>You can JOIN tables just as you normally would; just be sure to fully qualify the table names by prepending the database name:</p><p><font face="monospace">SELECT *<br/>FROM database1.table1 T1<br/>JOIN database2.table1 AS T2 ON T1.id = T2.id</font></p><p>If you don't need to JOIN the tables on a common field, you can combine multiple SELECTs using the UNION operator:</p><p><font face="monospace">SELECT *<br/>&nbsp;FROM database1.table1 T1<br/>&nbsp;WHERE T1.age &gt; 12<br/>UNION<br/>SELECT *<br/>&nbsp;FROM database2.table1 T2<br/>&nbsp;WHERE T2.age &gt; 12;</font></p><p>Now that we know how to query two tables at a time, let's try out a similar query on our actor tables. 
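The db_name.tbl_name prefixing idea can also be tried end-to-end with Python's built-in sqlite3 module, where ATTACH DATABASE gives each database a name you can prefix (the schemas and rows below are invented stand-ins for the split actor tables):

```python
import sqlite3

# Cross-database query in miniature: ATTACH a second database,
# then qualify each table with its database name.
conn = sqlite3.connect(":memory:")            # this connection is "main"
conn.execute("ATTACH DATABASE ':memory:' AS sakila2")
conn.execute("CREATE TABLE main.actor_a_l (actor_id INTEGER, last_name TEXT)")
conn.execute("CREATE TABLE sakila2.actor_m_z (actor_id INTEGER, last_name TEXT)")
conn.execute("INSERT INTO main.actor_a_l VALUES (30, 'ASTAIRE')")
conn.execute("INSERT INTO sakila2.actor_m_z VALUES (31, 'MONROE')")

# UNION the two databases' tables, just as in the article's query.
rows = conn.execute("""
    SELECT actor_id, last_name FROM main.actor_a_l
     WHERE actor_id BETWEEN 30 AND 50
    UNION
    SELECT actor_id, last_name FROM sakila2.actor_m_z
     WHERE actor_id BETWEEN 30 AND 50
     ORDER BY last_name
""").fetchall()
print(rows)  # [(30, 'ASTAIRE'), (31, 'MONROE')]
```

In MySQL the two databases already live on one server, so no ATTACH step is needed; the prefix alone does the work.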
We'll SELECT actors whose IDs are between a certain range:</p><p><font face="monospace">SELECT T1.actor_id,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;T1.first_name,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;T1.last_name<br/>&nbsp;FROM sakila.`actor_a-l` T1<br/>&nbsp;WHERE T1.actor_id BETWEEN 30 AND 50<br/>UNION<br/>SELECT T2.actor_id,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;T2.first_name,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;T2.last_name<br/>&nbsp;FROM sakila2.`actor_m-z` T2<br/>&nbsp;WHERE T2.actor_id BETWEEN 30 AND 50<br/>&nbsp;ORDER BY last_name;</font></p><p>You can see from the results that some actors are stored in the A - L table, while others originate from the M - Z table:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190604/union_query_results.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to construct and execute a SELECT query to fetch data from multiple databases using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>'s SQL Editor. Navicat helps you code fast with Code Completion and customizable Code Snippets by getting suggestions for keywords and stripping the repetition from coding. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>The Between Operator</title>
<link>https://www.navicat.com/company/aboutus/blog/1058-the-between-operator.html</link>
<description><![CDATA[<b>May 29, 2019</b> by Robert Gravelle<br/><br/><p>The <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1057-some-select-queries-you-must-know.html" target="_blank">Some SELECT Queries You Must Know</a> blog presented a couple of the most important queries to know, along with some examples. Continuing with that theme, today's blog focuses on the invaluable BETWEEN operator.</p><h1 class="blog-sub-title">Limiting Values to a Certain Range</h1><p>One way to filter the number of rows returned from a query is to limit the values of one or more fields to those that fall within a range. Typically, this can be done using the &gt;= and &lt;= operators. To illustrate, here's a query that returns information about <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila</a> film rentals that occurred between the 5th and 6th of July of 2005:</p><p><font face="monospace">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;customer_list.`name`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rental.rental_date,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;film.title<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;customer_list<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;INNER JOIN rental ON customer_list.ID = rental.customer_id<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;INNER JOIN film ON rental.inventory_id = film.film_id<br/>WHERE<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rental_date &gt;= '2005-07-05' AND rental_date &lt;= '2005-07-06'<br/></font></p><p>A shorter and more readable way to delineate the same range is to use the BETWEEN operator, which selects values that fall within a given range. The range is inclusive, i.e. both the start and end values are included:</p><p><font face="monospace">WHERE rental_date BETWEEN '2005-07-05' AND '2005-07-06'</font></p><p>In both cases, the results are constrained to the given date range:</p><font face="monospace"><table><tr><td>name</td><td>rental_date</td><td>title</td></tr><tr><td colspan="3">----------------------------------------------------------------</td></tr><tr><td>JAIME NETTLES</td><td>2005-07-05 22:49:24</td><td>TEQUILA PAST</td></tr><tr><td>PAMELA BAKER</td><td>2005-07-05 22:56:33</td><td>STAR OPERATION</td></tr><tr><td>EDUARDO HIATT</td><td>2005-07-05 22:59:53</td><td>BRIDE INTRIGUE</td></tr><tr><td>FERNANDO CHURCHILL</td><td>2005-07-05 23:13:51</td><td>BLADE POLISH</td></tr><tr><td>CARMEN OWENS</td><td>2005-07-05 23:25:54</td><td>CANDLES GRAPES</td></tr><tr><td>JOE GILLILAND</td><td>2005-07-05 23:32:49</td><td>TOURIST PELICAN</td></tr><tr><td>APRIL BURNS</td><td>2005-07-05 23:44:37</td><td>WIZARD COLDBLOODED</td></tr><tr><td>ERICA MATTHEWS</td><td>2005-07-05 23:46:19</td><td>JACKET FRISCO</td></tr></table></font><p>While ideal for dates, the BETWEEN operator works equally well with other data types. 
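</p><p>One side note for DATETIME columns (a general MySQL behavior, not something specific to this data set): a bare date such as '2005-07-06' is interpreted as '2005-07-06 00:00:00', so rentals made later on July 6th would fall outside the range. To cover the entire day, extend the upper bound:</p><p><font face="monospace">WHERE rental_date BETWEEN '2005-07-05' AND '2005-07-06 23:59:59'</font></p><p>Alternatively, an exclusive upper bound (rental_date &gt;= '2005-07-05' AND rental_date &lt; '2005-07-07') covers the whole day, including any fractional seconds.</p><p>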
Consider this further filtering of the above data that limits the results to those rentals that cost between 2.99 and 4.99:</p><font face="monospace"><p>SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;customer_list.`name`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rental.rental_date,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;film.title,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;film.rental_rate<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;customer_list<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;INNER JOIN rental ON customer_list.ID = rental.customer_id<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;INNER JOIN film ON rental.inventory_id = film.film_id<br/>WHERE<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rental.rental_date BETWEEN '2005-07-05' AND '2005-07-06'<br/> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AND film.rental_rate BETWEEN 2.99 AND 4.99<br/></p><table><tr><td>name</td><td>rental_date</td><td>title</td><td>rental_rate</td></tr><tr><td colspan="4">----------------------------------------------------------------------------------</td></tr><tr><td>JAIME NETTLES</td><td>2005-07-05 22:49:24</td><td>TEQUILA PAST</td><td>4.99</td></tr><tr><td>PAMELA BAKER</td><td>2005-07-05 22:56:33</td><td>STAR OPERATION</td><td>2.99</td></tr><tr><td>CARMEN OWENS</td><td>2005-07-05 23:25:54</td><td>CANDLES GRAPES</td><td>4.99</td></tr><tr><td>JOE GILLILAND</td><td>2005-07-05 23:32:49</td><td>TOURIST PELICAN</td><td>4.99</td></tr><tr><td>APRIL BURNS</td><td>2005-07-05 23:44:37</td><td>WIZARD COLDBLOODED</td><td>4.99</td></tr><tr><td>ERICA MATTHEWS</td><td>2005-07-05 23:46:19</td><td>JACKET FRISCO</td><td>2.99</td></tr></table></font><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog presented the all-important BETWEEN operator, along with some examples using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as the 
database client. Navicat helps you code fast with Code Completion and customizable Code Snippets by getting suggestions for keywords and stripping the repetition from coding. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>Some SELECT Queries You Must Know</title>
<link>https://www.navicat.com/company/aboutus/blog/1057-some-select-queries-you-must-know.html</link>
<description><![CDATA[<b>May 23, 2019</b> by Robert Gravelle<br/><br/><p>Data is a core part of many businesses both big and small. For example, Facebook stores each user's profile information, including data about their friends and posts, within a database system. SQL (short for Structured Query Language) is the programming language that enables developers and database administrators to work with that data.</p><p>There are a few frequently used SQL commands you should be familiar with for database work. Setting aside Data Definition Language (DDL) and Data Manipulation Language (DML) statements, chief among these are the commands that fetch data from tables and views using the SELECT statement. Today's blog will present a couple of the most important queries to know, along with some examples using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as the database client.</p><h1 class="blog-sub-title">Determining the Lowest/Highest Value for a Column</h1><p>The <a class="default-links" href="http://dev.mysql.com/doc/sakila/en/index.html" target="_blank">Sakila sample database</a> contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. The queries that we'll be building here today will run against it, so you may want to refer to the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><p>One of the central tables in the Sakila database is the film table. It contains details about every film that our fictional video rental store owns. 
It includes information such as the film titles, release year, as well as the rental price:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190523/film_table.jpg" style="max-width: 100%;"></td></tr><p>Suppose that we wanted to know the price range - that is to say, the lowest and highest rental rates. We could easily find out using the MIN() and MAX() aggregate functions. An aggregate function performs a calculation on a set of values and returns a single value result. There are many aggregate functions, including AVG, COUNT, SUM, MIN, MAX, etc. Here's a query that applies MIN() and MAX() to the rental_rate field of the film table:</p><p><font face="monospace">SELECT MIN(f.rental_rate) as lowest_price,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MAX(f.rental_rate) as highest_price<br/>FROM film f;</font></p><p>As expected, each function returns a single value:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190523/lowest_highest_rental_price.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Grouping Results by Category</h1><p>One of the most powerful clauses in SQL is GROUP BY. It groups rows that have the same values into summary rows. As such, the GROUP BY statement is often used with aggregate functions (COUNT, MAX, MIN, SUM, AVG) to group the result set by one or more columns.</p><p>We can use the GROUP BY clause to list the minimum and maximum rental cost for each movie rating - i.e. General, PG, PG-13, etc. 
All we need to do is add the rating field to the column list and append the GROUP BY clause to the end of our existing query:</p><p><font face="monospace">SELECT f.rating,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MIN(f.rental_rate) as lowest_price,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MAX(f.rental_rate) as highest_price<br/>FROM film f<br/>GROUP BY f.rating;</font></p><p>Our results show that each movie rating has films that range in price from $0.99 to $4.99:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190523/lowest_highest_rental_price_grouped_by_rating.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Today's blog presented a couple of the most important queries to know, along with some examples using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> as the database client. Navicat helps you code fast with Code Completion and customizable Code Snippets by getting suggestions for keywords and stripping the repetition from coding. You can <a class="default-links" href="https://www.navicat.com/en/download/navicat-premium" target="_blank">try it</a> for 14 days completely free of charge for evaluation purposes.</p>]]></description>
</item>
<item>
<title>Diagnose Bottlenecks and/or Deadlocks in MySQL 8 using Navicat Monitor</title>
<link>https://www.navicat.com/company/aboutus/blog/1056-diagnose-bottlenecks-and-or-deadlocks-in-mysql-8-using-navicat-monitor.html</link>
<description><![CDATA[<b>May 16, 2019</b> by Robert Gravelle<br/><br/><p>In last week's <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1055-how-the-mysql-8-performance-schema-helps-diagnose-query-deadlocks.html" target="_blank">How the MySQL 8 Performance Schema Helps Diagnose Query Deadlocks</a> blog, we had a crash course on Mutexes and Threads, learned about the MySQL Performance Schema, and applied a few queries against it for investigating performance bottlenecks. Today's follow-up presents a different approach to bottleneck and deadlock investigation using <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>.</p><h1 class="blog-sub-title">Navicat Monitor at a Glance</h1><p>Navicat Monitor is an agentless remote server monitoring tool for MySQL/MariaDB that is packed with features to make monitoring your database (DB) instances as effective and easy as possible. The term "agentless" is key because it describes a server-based architecture that does not require any software installation on the servers being monitored. Moreover, Navicat Monitor is accessible from anywhere via a web browser, providing unhampered access so that you can easily and seamlessly track your servers from anywhere in the world, at any time of day or night.</p><p>It boasts a whole host of features. 
Here are some of them, listed by screen:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 24px;"><li>Real-time Interactive Overview<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>View all registered MySQL/MariaDB instances and availability groups on one central web-based interface</li><li>Monitor the live MySQL/MariaDB metrics, CPU, memory and swap usage on host machines</li><li>Explore historical metrics on an hourly basis</li></ul></li><li>Instance Details<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Agentless remote monitoring</li><li>Generate reports for server performance metrics</li><li>Export reports to PDF files</li></ul></li><li>Alerts<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Over 40 preconfigured, fully-customizable alert policies</li><li>Get helpful advice on how to improve server performance</li><li>Receive notifications via SMTP, SMS, SNMP Trap and Slack, with customizable thresholds</li></ul></li><li>Query Analyzer<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Analyse the Slow Query Log and General Query Log</li><li>Find the queries having the biggest impact on your system</li><li>Store a query history to diagnose deadlock problems</li></ul></li><li>Replication Monitoring<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Display your replication topologies and quickly see the status of each replication</li><li>Review the replication error history for troubleshooting</li><li>Receive alerts when any replication problems are detected</li></ul></li><li>Security Monitoring<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Control access to your monitoring assets and features</li><li>Improve MySQL/MariaDB security with proactive alerts</li><li>Detect MySQL/MariaDB hacking activities</li></ul></li><li>User Management<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Role-based access control</li><li>User integration with OpenLDAP or Active Directory</li><li>Restrict login or role access by IP address</li></ul></li><li>Configuration Export and Restore<br/><ul style="list-style-type: circle; margin-left: 28px; line-height: 20px;"><li>Save the most recent configuration and restore it whenever you like</li><li>Migrate Navicat Monitor to a new host</li><li>Allow Repository Database migration</li></ul></li></ul><h1 class="blog-sub-title">Spotting Deadlocked Queries</h1><p>The Query Analyzer screen shows summary information for all executing queries and helps you identify problematic queries, such as top queries by cumulative execution time, slow queries, and deadlocks caused by two or more queries blocking each other. You'll find the Latest Deadlocked Query in the top portion of the screen:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190516/query_analyzer.jpg" style="max-width: 100%;"></td></tr><p>You can view previous deadlocks by clicking the View All button. Doing so opens the Deadlock page, which displays all deadlocks detected on the selected instance:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190516/deadlock_screen.jpg" style="max-width: 100%;"></td></tr><p>All monitored instances are shown in the left pane. Selecting an instance brings up the deadlocks for that instance. You can filter the list by providing a value in the "Search for a deadlock" text box.</p><p>By default, the deadlock list refreshes automatically every 5 seconds. You can change the auto-refresh interval using the Refresh Time drop-down menu. 
To pause the auto refresh, click the Pause button:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190516/refresh_time_and_rows_ro_display.jpg" style="max-width: 100%;"></td></tr><p>You can also set the number of rows to display via the Rows to Display drop-down menu.</p><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to spot bottlenecks and/or deadlocks in MySQL 8 using Navicat Monitor. Thinking about purchasing Navicat Monitor for MySQL/MariaDB? It's now available via <a class="default-links" href="https://www.navicat.com/en/store/navicat-monitor-plan" target="_blank">monthly and yearly subscription</a>!</p>]]></description>
</item>
<item>
<title>How the MySQL 8 Performance Schema Helps Diagnose Query Deadlocks</title>
<link>https://www.navicat.com/company/aboutus/blog/1055-how-the-mysql-8-performance-schema-helps-diagnose-query-deadlocks.html</link>
<description><![CDATA[<b>May 7, 2019</b> by Robert Gravelle<br/><br/><p>MySQL 5.5 saw the addition of the performance_schema database (the information_schema database has been around since MySQL 5.0). As we saw in <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1054-using-the-mysql-information-schema.html" target="_blank">last week's blog</a>, tables in information_schema contain statistical information about tables, plugins, partitions, processlist, status and global variables. As the name suggests, the tables of the performance_schema can be utilized to improve the performance of our MySQL instances. Just how to do that is the topic of today's blog. Just like last time, we'll be using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> to demo the various queries.</p><h1 class="blog-sub-title">A Brief Overview</h1><p>The Performance Schema is a tool for monitoring MySQL Server execution at a low level. The Performance Schema's storage engine shares the "performance_schema" name in order to easily distinguish it from other storage engines. Having its own engine allows us to access information about server execution while having minimal impact on server performance. Moreover, it uses views or temporary tables so as to minimize persistent disk storage. Finally, memory allocation is all done at server startup, so there is no further memory reallocation or sizing, which greatly streamlines performance.</p><p>The Performance Schema is enabled by default as of MySQL 5.6.6. Before that version, it was disabled by default. 
You can verify its status using this statement:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190507/show_variables.jpg" style="max-width: 100%"></td></tr><p>If you need to, you can always enable it explicitly by starting the server with the --performance-schema=ON flag.</p><p>Now let's get into some practical uses for the performance_schema.</p><h1 class="blog-sub-title">A Crash Course on Mutexes and Threads</h1><p>A mutex is a synchronization mechanism used in the code to enforce that only one thread at a given time can have access to some common resource. The resource is said to be "protected" by the mutex. The word "Mutex" is an informal abbreviation for "mutex variable", which is itself short for "mutual exclusion". In MySQL, it's the low-level object that InnoDB uses to represent and enforce exclusive-access locks to internal in-memory data structures. Here's how it works:</p><p>Once the lock is acquired, any other process, thread, and so on is prevented from acquiring the same lock. In InnoDB, multiple threads of execution access shared data structures. InnoDB synchronizes these accesses with its own implementation of mutexes and read/write locks. When two threads executing in the server (for example, two user sessions executing a query simultaneously) need to access the same resource, such as a file, a buffer, or some piece of data, they will compete against each other: the first query to obtain a lock on the mutex will cause the other query to wait until the first is done and unlocks the mutex. Should the first thread take a long time to complete, it can thus hold up other processes.</p><h1 class="blog-sub-title">Some Useful Queries</h1><p>All of the mutexes are listed in the mutex_instances table of the Performance Schema, which can be extremely helpful in investigating performance bottlenecks. 
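</p><p>As a quick first look, you can list the mutexes currently held by any thread with a query along these lines (a sketch built on the standard mutex_instances columns):</p><p><font face="monospace">SELECT NAME, OBJECT_INSTANCE_BEGIN, LOCKED_BY_THREAD_ID<br/>FROM performance_schema.mutex_instances<br/>WHERE LOCKED_BY_THREAD_ID IS NOT NULL;</font></p><p>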
The mutex_instances.LOCKED_BY_THREAD_ID and rwlock_instances.WRITE_LOCKED_BY_THREAD_ID columns are extremely important for investigating performance bottlenecks or deadlocks. Here's how to use them:</p><p>Suppose that thread 1 is stuck waiting for a mutex.</p><p>You can determine what the thread is waiting for:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190507/select_thread.jpg" style="max-width: 100%"></td></tr><p>Say the query result identifies that the thread is waiting for mutex A, found in events_waits_current.OBJECT_INSTANCE_BEGIN.</p><p>You can determine which thread is holding mutex A:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190507/select_mutex.jpg" style="max-width: 100%"></td></tr><p>Say the query result identifies that it is thread 2 holding mutex A, as found in mutex_instances.LOCKED_BY_THREAD_ID.</p><p>You can see what thread 2 is doing using this query:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190507/select_thread2.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to use the Performance Schema to diagnose bottlenecks and/or deadlocks in MySQL 8. An even easier way is to use <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a>. It has a Query Analyzer that shows summary information for all executing queries and lets you easily detect deadlocks, such as when two or more queries permanently block each other. We'll explore that next time.</p>]]></description>
</item>
<item>
<title>Using the MySQL Information Schema</title>
<link>https://www.navicat.com/company/aboutus/blog/1054-using-the-mysql-information-schema.html</link>
<description><![CDATA[<b>Apr 30, 2019</b> by Robert Gravelle<br/><br/><p>In relational databases, database metadata, such as information about the MySQL server, the name of a database or table, the data type of a column, or access privileges are stored in the data dictionary and/or system catalog. MySQL provides database metadata in a special schema called INFORMATION_SCHEMA. There is one INFORMATION_SCHEMA within each MySQL instance. It contains several read-only tables that you can query to obtain the information that you are looking for. In today's blog, we'll explore a few practical uses for the INFORMATION_SCHEMA, as demonstrated using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p><h1 class="blog-sub-title">Obtaining Table Information</h1><p>The information_schema.tables table contains metadata about, you guessed it, tables! Besides the table names, you can also retrieve their type (base table or view) and engine:</p><p><font face="monospace">SELECT table_name, table_type, engine<br/>FROM information_schema.tables<br/>WHERE table_schema = 'sakila'<br/>ORDER BY table_name;</font></p><p>Here is the above query and results in Navicat:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/table_info.jpg" style="max-width: 100%"></td></tr><p>You can also query information_schema.tables to get the size of a table:</p><p><font face="monospace">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;table_name AS `Table`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`<br/>FROM information_schema.TABLES<br/>WHERE table_schema = "sakila"<br/>AND table_name = "film";</font></p><p>Here are the results in Navicat Premium:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/table_size_query.png" style="max-width: 100%"></td></tr><p>With a few tweaks, you can list the size of every table
in every database:</p><p><font face="monospace">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;table_schema as `Database`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;table_name AS `Table`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`<br/>FROM information_schema.TABLES<br/>ORDER BY (data_length + index_length) DESC;</font></p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/all_table_sizes.png" style="max-width: 100%"></td></tr><p>You can even use information_schema.tables to list the size of every database in a MySQL instance!</p><p><font face="monospace">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;table_schema as `Database`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;round(SUM(data_length + index_length) / 1024 / 1024, 2) `Size in MB`<br/>FROM information_schema.TABLES<br/>GROUP BY table_schema<br/>ORDER BY SUM(data_length + index_length) DESC;</font></p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/database_sizes.png" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Viewing Table Statistics</h1><p>The INFORMATION_SCHEMA.STATISTICS table contains cached values. As such, these expire after 24 hours, by default.
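</p><p>That 24-hour default is governed by the information_schema_stats_expiry system variable in MySQL 8.0. As a quick sketch, you can inspect it, and refresh the statistics for a given table with ANALYZE TABLE:</p><p><font face="monospace">SHOW VARIABLES LIKE 'information_schema_stats_expiry';<br/>ANALYZE TABLE sakila.film;</font></p><p>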
If there are no cached statistics or statistics have expired, statistics are retrieved from storage engines when querying table statistics columns.</p><p>One use for the INFORMATION_SCHEMA.STATISTICS table is to see indexes for all tables within a specific schema:</p><p><font face="monospace">SELECT DISTINCT<br/>&nbsp;&nbsp;&nbsp;&nbsp;TABLE_NAME,<br/>&nbsp;&nbsp;&nbsp;&nbsp;INDEX_NAME<br/>FROM INFORMATION_SCHEMA.STATISTICS<br/>WHERE TABLE_SCHEMA = 'your_schema';</font></p><p>Here are the results in Navicat for the sakila database:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/indexes.jpg" style="max-width: 100%"></td></tr><p>You can view all indexes in all schemas by simply removing the WHERE clause. In that case, you may want to add the database name as well:</p><p><font face="monospace">SELECT DISTINCT<br/>&nbsp;&nbsp;&nbsp;&nbsp;stat.TABLE_SCHEMA as 'DATABASE',<br/>&nbsp;&nbsp;&nbsp;&nbsp;TABLE_NAME,<br/>&nbsp;&nbsp;&nbsp;&nbsp;INDEX_NAME<br/>FROM INFORMATION_SCHEMA.STATISTICS stat;</font></p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190430/indexes_for_all_dbs.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned just a few of the many ways to use the MySQL INFORMATION_SCHEMA to obtain metadata about a variety of objects within a MySQL instance, from databases to tables, columns, indexes, and more. Although queries were run in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>, <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> will work just as well! Try each of them for yourself; both come with a 14-day free trial!</p>]]></description>
</item>
<item>
<title>A Few MySQL Tips and Tricks</title>
<link>https://www.navicat.com/company/aboutus/blog/1051-a-few-mysql-tips-and-tricks.html</link>
<description><![CDATA[<b>Apr 23, 2019</b> by Robert Gravelle<br/><br/><p>If you work regularly with MySQL or MariaDB, then you will probably find <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium</a> or <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a> to be indispensable. In addition to MySQL and MariaDB, Navicat for MySQL also supports a number of cloud services, including Amazon RDS, Amazon Aurora, Oracle Cloud, Google Cloud, Microsoft Azure, Alibaba Cloud, Tencent Cloud and Huawei Cloud. Navicat Premium is a database development tool that allows you to simultaneously connect to MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application, and is also compatible with cloud databases. Both help you create views, queries and functions using an easy-to-use GUI. Moreover, you can save your work to the Cloud for reuse and collaboration with team members.</p><p>In today's blog, I'll be sharing a few tips and tricks for MySQL that you can apply using either Navicat for MySQL or Navicat Premium.</p><h1 class="blog-sub-title">1: Retrieve Unique Values from a Single Column</h1><p>Suppose you have a database filled with thousands of employee records and you'd like to know how many unique employee last names there are within the thousands of rows.
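</p><p>Incidentally, if all you need is the count itself, a COUNT(DISTINCT ...) answers the question directly (assuming the same employees table and lastname column used in the examples that follow):</p><p><font face="monospace">SELECT COUNT(DISTINCT lastname) AS unique_last_names<br/>FROM employees;</font></p><p>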
We can create a SELECT DISTINCT query that will do that:</p><p><font face="monospace">SELECT DISTINCT<br/>&nbsp;&nbsp;&nbsp;&nbsp;lastname<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;employees<br/>ORDER BY lastname;<br/></font></p><p>Rather than execute the above query every time we want to see the distinct names, we can create a view that we can execute queries against:</p><p><font face="monospace">CREATE VIEW distinct_emp_names AS<br/>SELECT DISTINCT<br/>&nbsp;&nbsp;&nbsp;&nbsp;lastname<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;employees<br/>ORDER BY lastname;<br/></font></p><p>Here are the results:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190423/MySQL-DISTINCT-last-name.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">2: Retrieve Unique Data from Multiple Columns</h1><p>The DISTINCT clause also works with more than one column. In this case, MySQL relies on the combination of values in these columns to determine their uniqueness in the result set.
For example, to get the unique combination of city and state from a table, you can create the following view:</p><p><font face="monospace">CREATE VIEW distinct_cities_and_states AS<br/>SELECT DISTINCT<br/>&nbsp;&nbsp;&nbsp;&nbsp;state, city<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;customers<br/>WHERE<br/>&nbsp;&nbsp;&nbsp;&nbsp;state IS NOT NULL<br/>ORDER BY state, city;<br/></font></p><p>Here are the results of the view:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190423/MySQL-DISTINCT-multiple-columns-example.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">3: Modify a Column Name</h1><p>Suppose you just want to change the name of a column; you can run an ALTER TABLE statement to do so. In MySQL 8.0, the RENAME COLUMN clause handles this without having to restate the column definition:</p><font face="monospace"><p>ALTER TABLE MyTable RENAME COLUMN `Old Name` TO `New Name`;</p></font><p>In Navicat, if you right-click a field in the Table Designer, you can choose to add, insert, delete and, of course, rename the field:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190423/MySQL_Windows_02_ObjectDesign.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">4: Split a Full Name into First and Last Names</h1><p>There is often a need to split a column that contains a full name (i.e. full_name) into two columns, such as first_name and last_name.
Here's how, using ALTER TABLE and UPDATE statements:</p><p><font face="monospace">ALTER TABLE emails<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ADD COLUMN `first_name` VARCHAR(30) AFTER `full_name`,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ADD COLUMN `last_name` VARCHAR(30) AFTER `first_name`;<br/>UPDATE emails<br/>SET<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Trim the white space<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`full_name` = LTRIM(RTRIM(`full_name`)),<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Get the first name and copy it to a new column<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`first_name` = SUBSTRING_INDEX(`full_name`, ' ', 1),<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Get the last name and copy it to a new column<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`last_name` = SUBSTRING_INDEX(`full_name`, ' ', -1);<br/></font></p><p>Here is the above statement as it appears in the Navicat Query Editor:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190423/navicat-view-split-full-name-into-fname-lname.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog we learned a few tips and tricks for MySQL that we can apply using either <a class="default-links" href="https://www.navicat.com/products/navicat-premium" target="_blank">Navicat Premium</a> or <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mysql" target="_blank">Navicat for MySQL</a>. Navicat database management tools make most DBA and development tasks a lot easier to carry out. Try them for yourself; both come with a 14-day free trial!</p>]]></description>
</item>
<item>
<title>Understanding Views in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1046-understanding-views-in-relational-databases.html</link>
<description><![CDATA[<b>Apr 16, 2019</b> by Robert Gravelle<br/><br/><p>A database view is a virtual or logical table which is comprised of a SELECT query. Much like a database table, a view also consists of rows and columns that you can query against. Most database management systems, including MySQL, even allow you to update data in the underlying tables through the view, but with some caveats. In today's blog, we'll learn what a view is and how to create one for MySQL 8 using Navicat Premium as our client.</p><h1 class="blog-sub-title">Basic Syntax</h1><p>In MySQL, you use the CREATE VIEW statement to create a new view. Here is the basic syntax:</p><font face="monospace"><p>CREATE<br/>&nbsp;&nbsp;&nbsp;[ALGORITHM = {MERGE  | TEMPTABLE | UNDEFINED}]<br/>VIEW view_name [(column_list)]<br/>AS<br/>select-statement;</p></font><p>Now, let's examine the syntax in more detail.</p><p><b>View Processing Algorithms</b></p><p>The ALGORITHM attribute tells MySQL which mechanism to use when creating the view. MySQL provides three algorithms: MERGE, TEMPTABLE, and UNDEFINED:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>The MERGE algorithm combines the input query with the SELECT statement, which defines the view, into a single query. MySQL then executes the combined query to return the merged result set. The MERGE algorithm cannot be applied to SELECT statements that contain aggregate functions such as MIN, MAX, SUM, COUNT, AVG or DISTINCT, GROUP BY, HAVING, LIMIT, UNION, and UNION ALL. If the MERGE algorithm cannot be applied, MySQL automatically changes the algorithm to UNDEFINED.</li><li>The TEMPTABLE algorithm first creates a temporary table based on the SELECT statement that defines the view, and then it executes the input query against this temporary table. 
Because MySQL has to create a temporary table to store the result set and move the data from the base tables to the temporary table, the TEMPTABLE algorithm is less efficient than the MERGE algorithm.</li><li>UNDEFINED is the default algorithm when you create a view without specifying an explicit algorithm. The UNDEFINED algorithm lets MySQL choose between the MERGE and TEMPTABLE algorithms. MySQL chooses the MERGE algorithm first, due to its greater efficiency, but falls back to the TEMPTABLE algorithm if MERGE cannot be employed.</li></ul><p><b>View Name</b></p><p>You can choose whatever name you wish for your view, so long as you follow the same naming rules as for tables. Moreover, views and tables share the same namespace within the database, so you can't give your view the same name as any existing table or view.</p><p><b>SELECT Statement</b></p><p>In the SELECT statement you can query data from any table or view that exists in the database. However, there are a few rules that the SELECT statement must adhere to:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>The SELECT statement may contain a subquery in the WHERE clause but not in the FROM clause.</li><li>The SELECT statement cannot refer to any variables including local variables, user variables, and session variables.</li><li>The SELECT statement cannot refer to the parameters of prepared statements.</li></ul><h1 class="blog-sub-title">Creating a View in Navicat</h1><p>In Navicat, you can create a new view by clicking the View button on the main toolbar and then clicking "New view" on the Objects toolbar:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190416/new_view_button.jpg" style="max-width: 100%;"></td></tr><p>The Definition tab is where you write your SQL.
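</p><p>For instance, a definition along these lines could be entered there (the view, table, and column names here are purely illustrative):</p><p><font face="monospace">CREATE ALGORITHM = MERGE<br/>VIEW active_customers AS<br/>SELECT customer_id, first_name, last_name<br/>FROM customer<br/>WHERE active = 1;</font></p><p>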
You can even use the View Builder to help write your statement!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190416/view_definition_tab.jpg" style="max-width: 100%"></td></tr><p>The Algorithm can be found on the Advanced tab, along with a few other options:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190416/view_advanced_tab.jpg" style="max-width: 100%"></td></tr><p>Once you're done, you can test your View using the Preview button and then save it by clicking on Save.</p><h1 class="blog-sub-title">Conclusion</h1><p>Views are a great way to combine data from one or more tables in a format that you can query, but keep in mind that there are some disadvantages of using database views. For one, querying data from a database view can be slow - especially if the view is created based on other views. Also, you have to remember to change the view whenever you change the structure of a table that your view refers to.</p>]]></description>
</item>
<item>
<title>Understanding Stored Procedures and Functions in Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/1012-understanding-stored-procedures-and-functions-in-relational-databases.html</link>
<description><![CDATA[<b>Apr 9, 2019</b> by Robert Gravelle<br/><br/><p>Most relational databases - including MySQL, MariaDB, and SQL Server - support stored procedures and functions. Stored procedures and functions are actually very similar, and can in fact be utilized to accomplish the same task. That being said, there are some crucial differences between the two that need to be considered when deciding which to use for a given job. We'll go over these in today's blog.</p><h1 class="blog-sub-title">Stored Procedures</h1><p>A stored procedure - or "proc" for short - is a set of Structured Query Language (SQL) statements with an assigned name, which are stored in a relational database management system as a group, so it can be reused and shared by multiple programs. Stored procedures can access or modify data in a database, but they are not tied to a specific database or object. This loose coupling is advantageous because it's easy to reappropriate a proc for a different but similar purpose.</p><p>Stored procedures can accept input parameters and return multiple values via output parameters; moreover, stored procedures can contain programmed statements that perform operations in the database and return a status value to a calling procedure or batch.</p><p>Finally, stored procedures can execute multiple SQL statements, call functions, and even iterate over result sets, performing complex operations akin to programming code. When completed, the proc typically returns one or more result sets to the calling application.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190409/proc.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">User Functions</h1><p>A function is similar to a stored procedure in that it contains a set of SQL statements that perform a specific task. The idea behind functions is to foster code reusability.
If you have to repeatedly write large SQL scripts to perform the same task, you can create a function that performs that task so that, next time, instead of rewriting the SQL, you can simply call that function. Databases typically include a set of built-in functions that perform a variety of tasks, so always take a look at these before writing your own.</p><p>A function accepts inputs in the form of parameters and returns a value. Unlike a stored procedure, a function cannot return a result set. Moreover, functions cannot modify the server environment or operating system environment.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190409/func.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Main Differences</h1><p>While both procs and functions can be employed in similar ways, functions are designed to send their output to a query or SQL statement. Meanwhile, stored procedures are designed to return outputs (i.e. one or more result sets) to the application.</p><p>Another difference is that, while you can group a set of SQL statements and execute them within a stored procedure, stored procedures cannot be called from within SQL statements. Functions, on the other hand, may be invoked directly from your queries and/or stored procedures.</p><p>Finally, a limitation of functions is that they have to be called for each row. Therefore, if you are using functions with large data sets, you can encounter performance issues.</p><h1 class="blog-sub-title">Viewing Stored Procedures and Functions in Navicat</h1><p>In Navicat database management and development tools, you'll see both procs and functions under "Functions".
The stored procedures have the "Px" prefix, while functions have an "fx":</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190409/functions.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Stored procedures and functions are very similar in many ways, but each serves a different purpose. You can think of a stored proc as a grouping of SQL statements, while a function takes input and returns an output value based on the input parameters.</p>]]></description>
</item>
<item>
<title>Performing Regular Expression Searches in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/1011-performing-regular-expression-searches-in-mongodb.html</link>
<description><![CDATA[<b>Apr 1, 2019</b> by Robert Gravelle<br/><br/><p>Regular expressions (regex) provide a way to match strings against a pattern so that your searches are "fuzzy" rather than exact. MongoDB comes with a regex engine built in so you can dig up documents even if you don't know the exact field value you're looking for. In today's blog we'll learn how to use regexes in MongoDB, using <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a>.</p><h1 class="blog-sub-title">Basic Syntax</h1><p>MongoDB provides the $regex operator for searching strings in the collection. The following example shows how it's done, using the Sakila Sample Database:</p><p>Let's say that we wanted to find movies with actors named "DAN", "DANNY", or even "DANIEL". Here's a statement to do that:</p><font face="monospace"><p>db.film_list.find({actors: {$regex: "DAN" }})</p></font><p>Once the command is executed successfully, the following output will be shown:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190401/dan_results.jpg" style="max-width: 100%;"></td></tr><p>We can simplify the statement by removing the "$regex:" qualifier and enclosing the search string within forward slashes (/) instead of quotes, as forward slashes denote a regex:</p><font face="monospace"><p>db.film_list.find({ actors: /DAN/ })</p></font><h1 class="blog-sub-title">Searching with Multiple Search Strings</h1><p>We can include more than one search string to match various combinations. Let's say that we wanted to find movies with Carrie-Anne Moss by matching "Carrie Moss" or "moss carrie-anne". Here's a statement to do that (the two lookaheads require both patterns to match within the same array element):</p><font face="monospace"><p>db.film_list.find(<br/>   &nbsp;&nbsp;&nbsp;{ actors: { $elemMatch: { $regex: /(?=.*moss)(?=.*carrie-anne)/i } } }<br/>   );</p></font><p>$elemMatch will return those records where an array element matches both criteria.
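</p><p>For comparison, the same search written with explicit $and clauses (which may match the two patterns against different array elements) looks like this:</p><font face="monospace"><p>db.film_list.find(<br/>&nbsp;&nbsp;&nbsp;{ $and: [ { actors: /Moss/i }, { actors: /carrie-anne/i } ] }<br/>);</p></font><p>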
In contrast, using a plain $and (which is the default for a list of criteria) without $elemMatch would return movies with "Carrie-Anne Moss", but also those where "Sandra Moss" and "Carrie-Anne Fisher" are featured. That would be more of a superset of the information we want to retrieve. Also note the "i" flag; it makes the regex case-insensitive. It's useful for searches that are entered by the user, because we can't rely on users to enter consistent casing.</p><h1 class="blog-sub-title">The options Parameter</h1><p>We can also provide additional instructions to our regexes via the options parameter.</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>i: Case insensitivity to match upper and lower cases.</li><li>m: For patterns that include anchors (i.e. ^ for the start, $ for the end), match at the beginning or end of each line for strings with multiline values. Without this option, these anchors match at beginning or end of the string.</li><li>x: Extended capability to ignore all white space characters in the $regex pattern unless escaped or included in a character class. Unlike other flags, this one requires $regex with $options syntax.</li><li>s: Allows the dot character (.) to match all characters including newline characters.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>The $regex operator provides an easy means of pattern matching in MongoDB. For best results, make sure that the document fields that you are searching are indexed. That way, the query will make use of indexed values to match the regular expression. This makes the search very fast compared to scanning the whole collection with the regular expression.</p><p>If you'd like to learn more about Navicat for MongoDB, please visit the <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">product page</a>. Do you work with many database types?
<a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 12.1</a> also supports MongoDB!</p>]]></description>
</item>
<item>
<title>All about MongoDB's _id Field</title>
<link>https://www.navicat.com/company/aboutus/blog/1010-all-about-mongodb-s-_id-field.html</link>
<description><![CDATA[<b>Mar 26, 2019</b> by Robert Gravelle<br/><br/><p>Open up any document in a MongoDB database and you'll notice an _id field:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/film_table_record.jpg" style="max-width: 100%;"></td></tr><br/><p>In fact, the ObjectId/_id is the only field that exists across every MongoDB document. In today's blog, we'll explore what it is and why it's important to your MongoDB database.</p><h1 class="blog-sub-title">The Structure of ObjectId</h1><p>As a quick, opening summary, these are a few of _id's principal characteristics:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>_id is the primary key on documents in a collection; with it, documents (records) can be differentiated from one another.</li><li>_id is automatically indexed. Lookups specifying { _id: &lt;someval&gt; } refer to the _id index as their guide.</li><li>By default the _id field is of type ObjectID, one of MongoDB's BSON types. Users can also override _id to something other than an ObjectID, if desired.</li></ul><p>ObjectIDs are 12 bytes long, composed of several 2-4 byte segments. Each segment designates a specific aspect of the document's identity. The following values make up the full 12-byte combination:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>a 4-byte value representing the seconds since the Unix epoch</li><li>a 3-byte machine identifier</li><li>a 2-byte process id</li><li>a 3-byte counter, starting with a random value</li></ul><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190326/_id.png" style="max-width: 100%;"></td></tr><p>Typically, you don't have to concern yourself with generating the ObjectID.
If a document has not been assigned an _id value, MongoDB will automatically generate one.</p><h1 class="blog-sub-title">Creating a New ObjectId</h1><p>If you want to generate a new ObjectId yourself, you can use the following code:</p><font face="monospace">newObjectId = ObjectId()</font><p>You can also type it directly into the Navicat editor.</p><p>That will generate a unique _id such as:</p><font face="monospace">ObjectId("5349b4ddd2781d08c09890f3")</font><p>Alternatively, you can provide a 12-byte id:</p><font face="monospace">myObjectId = ObjectId("5349b4ddd2781d08c09890f4")</font><h1 class="blog-sub-title">Getting the Timestamp of a Document</h1><p>Since the _id ObjectId by default stores the 4-byte timestamp, in most cases you do not need to store the creation time of any document. You can fetch the creation time of a document using the getTimestamp method:</p><font face="monospace">ObjectId("5349b4ddd2781d08c09890f4").getTimestamp()</font><p>This will return the creation time of this document in ISO date format:</p><font face="monospace">ISODate("2014-04-12T21:49:17Z")</font><h1 class="blog-sub-title">Converting ObjectId to String</h1><p>In some cases, you may need the value of ObjectId in a string format. To convert the ObjectId to a string, use the following code:</p><font face="monospace">newObjectId.str</font><p>The above code will return the string format of the ObjectId:</p><font face="monospace">5349b4ddd2781d08c09890f3</font><h1 class="blog-sub-title">Document Sorting</h1><p>Since each ObjectId contains a timestamp, you can sort your documents by _id to approximate creation order.
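</p><p>As a quick illustration (the collection name is just an example), this returns the five most recently created documents first:</p><font face="monospace"><p>db.film_list.find().sort({ _id: -1 }).limit(5)</p></font><p>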
Be sure to note, however, that this sorting method does not represent a strict or exact ordering, because other components of the ID can come into play, causing the order to reflect other variables than just creation time.</p><h1 class="blog-sub-title">Changing the ObjectId</h1><p>The _id field is basically immutable so that, after a document is created, it has, by definition, been assigned an _id, which cannot be changed. Having said that, the _id can be overridden when you insert new documents. Overriding the _id field for a document can be useful, but when you do so, you are responsible for ensuring that the values for each document are unique.</p><h1 class="blog-sub-title">Conclusion</h1><p>MongoDB's _id field plays a vital role in every MongoDB collection. Therefore, understanding how it's created as well as when to override it can be useful for managing your collections.</p><p>If you'd like to learn more about Navicat for MongoDB, please visit the <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">product page</a>. Do you work with many database types? <a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 12.1</a> also supports MongoDB!</p>]]></description>
</item>
<item>
<title>Using Covered Queries in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/1006-using-covered-queries-in-mongodb.html</link>
<description><![CDATA[<b>Mar 5, 2019</b> by Robert Gravelle<br/><br/><p>You've probably heard that column indexing is a great way to optimize query performance by minimizing the number of disk accesses required by the query. MongoDB has a specific application of field indexing called Covered Queries, where all of a query's columns are indexed. Covered Queries are very fast because MongoDB doesn't have to examine any documents apart from the indexed ones. In today's blog, we'll be learning how to use Covered Queries to query data faster.</p><h1 class="blog-sub-title">Covered Queries Defined</h1><p>In the intro paragraph, we alluded to how all of a covered query's columns are indexed. There's slightly more to it than that. Specifically, a covered query is a query in which:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>All the fields in the query are part of an index.</li><li>All the fields returned in the query are in the same index.</li></ul><p>Behind the scenes, MongoDB matches the query conditions and returns the result using the same index without actually looking inside the documents. Since indexes are present in RAM, fetching data from indexes is much faster as compared to fetching data by scanning documents.</p><p>Now that we know exactly what constitutes a covered query, let's write some!</p><h1 class="blog-sub-title">Creating the Indexes</h1><p>We'll run our query against the film table of the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>. It contains a number of fields pertaining to fictional movies. These include the title, a description, release year, as well as rental information such as the price and rental duration. 
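</p><p>For reference, the compound index that the GUI steps in this section build could also be created from the mongo shell, using the same fields and index name as the walkthrough:</p><font face="monospace"><p>db.film.createIndex(<br/>&nbsp;&nbsp;&nbsp;{ title: 1, release_year: 1 },<br/>&nbsp;&nbsp;&nbsp;{ name: "film_title_year" }<br/>);</p></font><p>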
Here's a document in <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a>'s Tree View:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/film_table_record.jpg" style="max-width: 100%;"></td></tr><p>Let's create a compound index on the title and release_year fields:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Click the large Index button on the main toolbar followed by the New Index button on the Objects toolbar:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/new_index_button.jpg" style="max-width: 100%;"></td></tr><br/><br/>On the General tab,<br/><br/></li><li>Select "film" from the Collection Name drop-down list.</li><li>Under the "Index Version:" header, select "title" from the Field drop-down and choose "ASC" from the Type drop-down.</li><li>Then, click the plus (+) button at the bottom of the screen to add a second field:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/index_keys.jpg" style="max-width: 100%;"></td></tr><br/></li><li>Select "release_year" from the Field drop-down and once again choose "ASC" from the Type drop-down.</li><li>Now, click on Text tab, and, under the "Weights" header, follow the same process as above to select the two fields from the Field drop-down and assign a weight of 1 for both fields:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/weights.jpg" style="max-width: 100%;"></td></tr><br/><br/></li><li>Finally, click the Save button, and give the index a name of "film_title_year".</li></ul><h1 class="blog-sub-title">Executing the Covered Query</h1><p>To execute a query against our indexed fields:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Click the large Query button on the main toolbar followed by the New 
Query button on the Objects toolbar.</li><li>In the query editor, type the following find() invocation:<br/><br/><font face="monospace">db.film.find({title:{$regex : ".*AGENT.*"}},{title:1,release_year:1,_id:0})</font><br/><br/></li><li>Click the Run button to execute the query. Here are the results:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/query_results.jpg" style="max-width: 100%;"></td></tr><br/></li></ul><p>You can choose Query &gt; Explain from the main menu to see the execution stats on the query:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190305/explain.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>If you're looking to give your queries a boost, consider using Covered Queries. They are very fast because MongoDB answers them entirely from the index, which is typically held in RAM.</p>]]></description>
</item>
<item>
<title>Performing Database-wide Searches in Navicat Premium</title>
<link>https://www.navicat.com/company/aboutus/blog/1005-performing-database-wide-searches-in-navicat-premium.html</link>
<description><![CDATA[<b>Feb 26, 2019</b> by Robert Gravelle<br/><br/><p>If you've ever tried to locate a specific column in a large database, I'm sure that you'd agree that it can be a painstaking task. You can glean a lot of information about the DB structure from the information_schema schema. It contains a list of every table and of every field within each table. You can then run queries using the information gleaned from those tables. The specific tables involved are SCHEMATA, TABLES and COLUMNS. Relationships between them let you reconstruct exactly how the tables in a schema are defined.</p><p>However, an easier way to perform a database-wide search is to use Navicat Premium. Available in the non-Essentials editions, Navicat provides a Find in Database/Schema feature for finding data within tables/views or object structures within a database and/or schema. In today's blog, we'll learn how to use it.</p><h1 class="blog-sub-title">Locating a Column</h1><p>Let's start by finding a column within our database. We'd like to find the "release_year" column within the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="_blank">Sakila Sample Database</a>. Here's how we would go about it:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Open the Find in Database/Schema window; choose Tools -> Find in Database/Schema from the menu bar.</li><li>Select a target Connection, Database and/or Schema.</li><li>Enter the search string in the "Find what" text box.</li><li>Choose the "Structure" item in the "Look in" drop-down list. The other option is of course "Data".</li><li>Choose the "Search Mode".
Choices include Contains, Whole Word, Prefix, or Regular Expression.</li><li>Check the "Case Insensitive" box to disable case-sensitive matching.</li><li>Since we selected "Structure" in the "Look in" drop-down list, we can now choose to search different objects, including Tables, Views, Functions, Queries, and/or Events.<br/><br/>Here is what the form should look like with all of the fields filled in and/or selected:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190226/find_column.jpg" style="max-width: 100%;"></td></tr><br/></li><li>Now, go ahead and click the Find button to obtain the results. In this case, Navicat matched the "release_year" column in one table:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190226/column_results.jpg" style="max-width: 100%;"></td></tr><br/><br/>You can double-click an object in the Find Results list to view the record or the structure. It'll be highlighted:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190226/object_in_table.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Searching for Data</h1><p>Trying to find a given value within the entire database without a search tool is scarcely worth the trouble.
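<p>The four Search Mode options behave roughly like the following predicates (a plain-JavaScript sketch of the matching rules only; Navicat performs the actual matching itself):</p>

```javascript
// Rough interpretation of the four Search Modes as string predicates.
// Illustrative only; not Navicat's implementation.
const modes = {
  contains:  (text, term) => text.includes(term),        // substring anywhere
  wholeWord: (text, term) => new RegExp(`\\b${term}\\b`).test(text),
  prefix:    (text, term) => text.startsWith(term),      // match at the start
  regex:     (text, term) => new RegExp(term).test(text) // user-supplied pattern
};

console.log(modes.prefix('JOHNSON', 'JOHN'));      // true
console.log(modes.contains('LITTLEJOHN', 'JOHN')); // true
console.log(modes.prefix('LITTLEJOHN', 'JOHN'));   // false
console.log(modes.regex('release_year', '^release')); // true
```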
In Navicat, all we need to do is follow the same process as above, except that now we'll select "Data" from the "Look in" drop-down.</p><p>Here are the results for a "Find what" value of "JOHN" with "Prefix" selected from the "Search Mode" drop-down:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190226/data_results.jpg" style="max-width: 100%;"></td></tr><p>As you can see, this more general search resulted in more matches.</p><p>Again, double-clicking an object in the Find Results list displays the record(s) in a new tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190226/data_in_table.jpg" style="max-width: 100%;"></td></tr><p>Notice the query that Navicat generated to fetch the desired results.</p><h1 class="blog-sub-title">Conclusion</h1><p>Navicat's Find in Database/Schema tool greatly facilitates finding data or object structures within an entire database or schema. Compared with querying information_schema by hand, there is really no contest. You can learn more about Navicat Premium's features on the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">product page</a>.</p>]]></description>
</item>
<item>
<title>Create DBRefs in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/1004-create-dbrefs-in-mongodb.html</link>
<description><![CDATA[<b>Feb 19, 2019</b> by Robert Gravelle<br/><br/><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/1003-relationships-in-mongodb.html" target="_blank">last week's blog</a>, we explored the pros and cons of document relationship modeling via Embedded and Referenced approaches in MongoDB. We then gained some valuable experience with each by creating both an Embedded and Referenced relationship. Today, we'll learn how to create DBRefs in MongoDB.</p><h1 class="blog-sub-title">Comparing DBRefs to Referenced Relationships</h1><p>As we saw in last week's blog on MongoDB relationships, we can implement a normalized database structure in MongoDB by creating a Referenced Relationship. Referenced Relationships are also often referred to as Manual References because we <i>manually</i> store the referenced document's id (or entire document) inside the other document. In cases where a document contains references from different collections, we can use MongoDB DBRefs.</p><p>As an example of where we would use DBRefs instead of manual references, consider a scenario in which we store different types of addresses (home, office, mailing, etc.) in different collections (address_home, address_office, address_mailing, etc.). Now, when a user collection's document references an address, it also needs to specify which collection to look up based on the address type.
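<p>A DBRef solves this by carrying the collection name ($ref) along with the document id ($id). To make the lookup concrete, here is a plain-JavaScript simulation in which ordinary objects stand in for collections (all names and ids are hypothetical):</p>

```javascript
// Simulation of resolving a DBRef-style reference ($ref = collection name,
// $id = referenced document's _id). Plain objects stand in for collections;
// every name here is hypothetical.
const db = {
  address_home:   [{ _id: 'a1', street: '135 Sycamore Dr.' }],
  address_office: [{ _id: 'b7', street: '1 Corporate Way' }],
};

function resolveDBRef(db, ref) {
  // Look in the collection named by $ref for the document whose _id is $id.
  return db[ref.$ref].find(doc => doc._id === ref.$id);
}

const user = {
  name: 'Barbara Palmer',
  address: { $ref: 'address_home', $id: 'a1' },
};

console.log(resolveDBRef(db, user.address).street); // "135 Sycamore Dr."
```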
Whenever a document needs to reference documents from many collections, we should use DBRefs.</p><h1 class="blog-sub-title">DBRefs in Action</h1><p>DBRefs are made up of three fields:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>$ref: This field specifies the collection of the referenced document.</li><li>$id: This field specifies the _id field of the referenced document.</li><li>$db: This is an optional field and contains the name of the database in which the referenced document lies.</li></ul><p>Let's modify Barbara Palmer's address information from last week's blog by removing the embedded fields and replacing them with DBRefs.</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Open her information in the Navicat Editor in "Tree View". Expand the first address document and click on the plus (+) sign below the address fields to insert the three DBRef fields listed above:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190219/employees_collection.jpg" style="max-width: 100%;"></td></tr></li><li>The DBRef requires the address document's $id field. You can locate it in the addresses collection. Just copy it from the "135 Sycamore Dr."
address document:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190219/addresses_document.jpg" style="max-width: 100%;"></td></tr></li><li>Once you're done adding the new fields, be sure to delete all of the existing address information from the document:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190219/delete_value.jpg" style="max-width: 100%;"></td></tr></li><li>Here's the completed address_home DBRef:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190219/employee_with_DBRef.jpg" style="max-width: 100%;"></td></tr></li><li>Follow the same procedure for Barbara Palmer's second address, except this time, assign it a $ref of "address_work".</li></ul><h1 class="blog-sub-title">Using the New DBRefs</h1><p>Now that we've updated our employee document to use DBRefs, we will have to alter how we go about fetching the data from the employees collection. Here's some code that dynamically looks in the collection specified by the $ref parameter (address_home in our case) for a document with the id specified by the DBRef $id parameter:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190219/query.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we learned how to create DBRefs in MongoDB using Navicat for MongoDB. DBRefs are great for linking documents located in multiple collections with documents from a single collection. Having said that, if your documents don't reference multiple collections, I would recommend sticking with manual references.</p><p>If you'd like to learn more about Navicat for MongoDB, please visit the <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">product page</a>. Do you work with many database types?
<a class="default-links" href="https://navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium 12.1</a> also supports MongoDB!</p>]]></description>
</item>
<item>
<title>Relationships in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/1003-relationships-in-mongodb.html</link>
<description><![CDATA[<b>Feb 13, 2019</b> by Robert Gravelle<br/><br/><p>As the name implies, Relational Databases (RDBMSes) maintain relationships between tables to organize data in meaningful ways. Document databases such as MongoDB are sometimes called "schema-less" because they don't enforce relationships the way RDBMSes do. However, while document databases don't require the same predefined structure as a relational database, that doesn't mean that they don't support it. In fact, MongoDB allows relationships between documents to be modeled via Embedded and Referenced approaches. In today's blog, we'll give each a try using <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a>.</p><h1 class="blog-sub-title">The Test Case</h1><p>As an example, we'll consider the use case of the ACME corporation. They need to store addresses in such a way that each is linked to an employee. One employee can have multiple addresses, making this a one-to-many (1:N) relationship. That's no problem, as relationships in MongoDB can be any of one-to-one (1:1), one-to-many (1:N), many-to-one (N:1) or many-to-many (N:N), just as in relational databases.</p><p>Here is the document structure of the employees document in Navicat's JSON view:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/employees_document.jpg" style="max-width: 100%;"></td></tr><p>And here is the addresses document:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/addresses_document.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Creating Embedded Relationships</h1><p>Using the embedded approach, we would embed the addresses document directly inside the employees document.
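<p>In plain JavaScript terms, the embedded shape looks like this (field values are illustrative, not taken from the actual sample data):</p>

```javascript
// Embedded approach: the employee document carries its addresses directly,
// so a single lookup returns everything. Field values are illustrative.
const employee = {
  name: 'Tom Smith',
  address: [ // embedded "address" array element
    { street: '135 Sycamore Dr.', city: 'Springfield' },
    { street: '1 Corporate Way', city: 'Springfield' },
  ],
};

// One fetch of the employee document yields all of its addresses.
console.log(employee.address.length);    // 2
console.log(employee.address[0].street); // "135 Sycamore Dr."
```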
We can easily do that in Navicat for MongoDB as follows:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Open the addresses collection in JSON view and copy the last two documents:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/copy_addresses.jpg" style="max-width: 100%;"></td></tr></li><li>Switch to the employees collection and edit the first document:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/edit_employee.jpg" style="max-width: 100%;"></td></tr></li><li>Paste the addresses into the employee document that you want to associate them with and enclose them within an "address" array element:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/employee_doc_with_embedded_address.jpg" style="max-width: 100%;"></td></tr></li></ul><p style="font-size: 18px;"><b>Pros and Cons</b></p><p>This approach maintains all the related data in a single document, which makes it easy to retrieve and maintain.
The whole document can now be retrieved in a single query:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/employee_embedded_address_query.jpg" style="max-width: 100%;"></td></tr></p><p>The drawback to Embedded Relationships is that if the embedded document keeps growing in size, it can negatively impact read/write performance.</p><h1 class="blog-sub-title">Creating Referenced Relationships</h1><p>Using this approach, both the employee and address documents would be maintained separately, but the employees document would contain a field that references the address document's id field:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190213/employee_doc_with_referenced_address.jpg" style="max-width: 100%;"></td></tr><p>As shown above, the employee document contains the array field "address_ids", which contains the ObjectIds of the corresponding addresses. Using these ObjectIds, we can query the address documents and get address details from there.</p><p style="font-size: 18px;"><b>Pros and Cons</b></p><p>Although this approach keeps documents at a more manageable size, we now need two queries to fetch address details: one to retrieve the address_ids field from the employees document and another to fetch the addresses from the addresses collection:</p><p><font face="monospace">var result &nbsp;&nbsp;&nbsp;= db.employees.findOne({"name":"Tom Smith"},{"address_ids":1})<br/>var addresses = db.addresses.find({"_id":{"$in":result["address_ids"]}})</font></p><h1 class="blog-sub-title">Going Forward</h1><p>In the next blog, we'll learn how to use MongoDB Referenced Relationships (also referred to as Manual References) and DBRefs in Navicat for MongoDB. You can learn more about it on the <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">product page</a>.
You can download the fully functional application and use it free for a fourteen-day trial period!</p>]]></description>
</item>
<item>
<title>Deciding between NoSQL and Traditional Relational Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/994-deciding-between-nosql-and-traditional-relational-databases.html</link>
<description><![CDATA[<b>Jan 29, 2019</b> by Robert Gravelle<br/><br/><p>Selecting which database will manage all of your company's data can be a very daunting decision; one that will have long-term ramifications for your employees, partners, and customers. Perhaps you're already contemplating a few specific vendors? Not so fast! Have you taken the time to weigh the pros and cons of NoSQL versus traditional relational databases? If not, you've come to the right place. Let's get started!</p><h1 class="blog-sub-title">Relational Database Management Systems (RDBMSes)</h1><p>This category of databases, which, in addition to MySQL, includes Oracle, SQL Server and PostgreSQL, has a long history (dating to the 1970s) and well-developed best practices for achieving optimal performance. Case in point, computer scientist E.F. Codd developed a set of rules that must be followed in order for a database management system to be considered relational. Codd also introduced the concept of database normalization in 1971. Database normalization is the process of structuring a relational database in a way that reduces data redundancy while improving data integrity.</p><p>Strengths include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Atomicity, Consistency, Isolation, Durability (ACID) compliance. ACID compliance reduces anomalies and protects the integrity of your database by defining precisely how transactions interact with the database.</li><li>Your data is organized in a structured way. Having your data organized with a rigid structure makes it easier to work with because you always know where to find each piece of data. Just be sure to keep an updated diagram of your schema.</li><li>RDBMS tools tend to come with high-quality support, product suites and add-ons to manage these databases, thanks to the length of time they've been on the market.</li></ul><p>The main problem with RDBMSes is scaling them as your database grows.
There are techniques you can employ, such as sharding, but these are not trivial to implement.</p><h1 class="blog-sub-title">NoSQL Databases</h1><p>NoSQL Databases are your best choice for dealing with massive amounts of unstructured data or if your data requirements aren't clear at the outset. In such cases, you likely don't have the luxury of developing a schema as you would with a relational database. Thus, NoSQL Databases provide much more flexibility than their traditional relational counterparts.</p><p>Advantages include:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>NoSQL databases like CouchDB, MongoDB, Cassandra, and HBase are designed to work with Big data...really Big data. In fact, you can store huge amounts of data with little to no structure. Moreover, NoSQL databases permit data mixing, allowing different types of data to be stored together.</li><li>NoSQL databases can be scaled across multiple data centers out of the box with minimal effort.</li></ul><p>Of course, NoSQL Databases are not without disadvantages:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>The NoSQL community lacks the maturity of the MySQL user base since it is relatively new. While the community is rapidly growing, as it currently stands, SQL database management systems like MySQL still have an edge in terms of the base of highly experienced users.</li><li>A major issue with NoSQL databases is the lack of reporting tools for performance testing and analysis. Compare that with traditional RDBMSes, where you can find a wide range of monitoring tools to help you performance-tune your instances - for example, <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor for MySQL/MariaDB</a>.</li><li>There's a lack of standardization. NoSQL vendors tend to employ their own syntax.
These can be difficult to master and are usually incompatible with the SQL used in relational databases.</li><li>Quite often NoSQL databases sacrifice ACID compliance for processing speed and flexibility. Depending on your needs, this may cause problems.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>As we saw in today's blog, the type of database you select for your organization depends largely on how you'll be using it as well as what type of data you'll be storing. In the next blog, we'll be comparing some leading products from each category.</p>]]></description>
</item>
<item>
<title>Get a Health and Performance Overview of All your Instances in One Place!</title>
<link>https://www.navicat.com/company/aboutus/blog/989-get-a-health-and-performance-overview-of-all-your-instances-in-one-place.html</link>
<description><![CDATA[<b>Jan 8, 2019</b> by Robert Gravelle<br/><br/><p>Navicat Monitor for MySQL/MariaDB's starting screen is the Overview Dashboard. It's a one-stop shop for real-time analytics on the health and performance of all your instances. Since the introduction of Compact View in version 1.7, you can now monitor hundreds of instances at a glance! In today's blog, we'll learn how to build a customized dashboard for your server metrics to get a global view of each instance, as well as apply instance grouping.</p><h1 class="blog-sub-title">One Screen, Two Views</h1><p>The Overview Dashboard can present instance information in one of two ways: Comfort View and the new Compact View that was introduced in version 1.7. Comfort View employs instance cards to let you identify the server status and system resource usage, while Compact View presents a more streamlined set of data cards about each instance. Here's a comparison of each:</p><figure><figcaption>Comfort View</figcaption><img src="https://www.navicat.com/images/06.06_DiscoverNavicatMonitor_02_Dashboards_Comfort.png"></figure><p></p><figure><figcaption>Compact View</figcaption><img src="https://www.navicat.com/images/06.06_DiscoverNavicatMonitor_02_Dashboards_Compact.png"></figure><h1 class="blog-sub-title">Assessing Server Status</h1><p>Compact View shows the instance name, along with the number of Critical and Warning issues. For more information, choose Comfort View. Each card in Comfort View shows a bright green, orange, red, or gray Status Bar indicating the status of your instances at a glance. These colors correspond to Healthy, Warnings, Critical, and Paused. This allows you to easily identify instances that require immediate attention. You can click on the Status Bar to see a list of all warnings and/or alerts for that instance.</p><h1 class="blog-sub-title">Customizing Card Metrics</h1><p>By default, instance cards show all available system resource metrics.
Click the "X / Y Shown" label and uncheck the metrics that you are not interested in. You can also use it to change the display style that works best for you by choosing Compact View or Comfort View:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/view-and-metrics-list.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Getting More Information about a Metric</h1><p>Hovering the mouse pointer over a metric in a card brings up a small popup chart:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/network-io-chart.jpg" style="max-width: 100%;"></td></tr><p>Moreover, hovering the mouse pointer over the chart shows the time and the values at that point in time.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/network-io-chart-hover.jpg" style="max-width: 100%;"></td></tr><p>You can click anywhere on an instance to view its details and metrics:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/instance-details.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Changing the Order of Instances</h1><p>Want to change the order of your instances? Just click "SORT BY" and select a sorting option. If you choose "Alert Severity", the instance cards will be sorted by the severity level from critical to low. Selecting "Custom" allows you to position instances however you wish. To do that, click and hold the connection icon on an instance card and then drag-and-drop it to the desired position. Navicat Monitor automatically remembers your custom order!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/sort-by-list.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Filtering Instances</h1><p>There are a couple of ways to filter instances.
One way is to click an availability group name label above the instances:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/groups.jpg" style="max-width: 100%;"></td></tr><p>You can also filter instances by their health states. At the top of the screen, there are four colored checkboxes whose labels show the total number of servers that have critical alerts (red), have warnings (orange), are healthy (green), or have paused or stopped monitoring (grey). Clicking a checkbox adds or removes the instances in that state from the view.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2019/20190108/health-state-boxes.jpg" style="max-width: 100%;"></td></tr>]]></description>
</item>
<item>
<title>Optimize Query Performance using the Navicat Query Analyzer (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/976-optimize-query-performance-using-the-navicat-query-analyzer-part-2.html</link>
<description><![CDATA[<b>Dec 31, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">The Query Analyzer Section</h1><p>Navicat Monitor for MySQL/MariaDB's Query Analyzer tool provides a graphical representation of the query logs that makes interpreting their contents much easier. In addition, the Query Analyzer tool enables you to monitor and optimize query performance, visualize query activity statistics, analyze SQL statements, as well as quickly identify and resolve long-running queries. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/975-optimize-query-performance-using-the-navicat-query-analyzer-part-1.html" target="_blank">Last week's blog</a> provided an overview of this useful feature and described how to take full advantage of the Latest Deadlock Query and Process List screen sections. In this second and final installment, we will learn all about the Query Analyzer screen section.</p><h1 class="blog-sub-title">How it Works</h1><p>The Query Analyzer collects information about query statements using one of the following three methods:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Retrieve the General Query Log from the server and analyze its information.</li><li>Retrieve the Slow Query Log from the server and analyze its information.</li><li>Query the performance_schema database and analyze it for specific performance information.<br/><br/>With regard to the Performance Schema, it was introduced in MySQL Server 5.5.3. It normalizes query statements and truncates them to a length of 1024 bytes. Moreover, similar queries that differ only in their literal values are combined.
Finally, quoted values and numbers are replaced by a question mark (?).</li></ul><p>You'll find the Query Analyzer section below the Latest Deadlock Query and Process List sections that we covered last week:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/query_analyzer.jpg" style="max-width: 100%;"></td></tr><p>The Query Analyzer section is itself divided into two subsections: Top 5 Queries and Query Table. We'll look at those now.</p><h1 class="blog-sub-title">Top 5 Queries</h1><p>This section shows the top 5 most time-consuming queries, along with a color-coded donut chart that gives you an immediate snapshot of potential issues. You can click the refresh button at any time to update the top 5 queries list.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/refresh.jpg" style="max-width: 100%;"></td></tr><p>The Top 5 Queries section contains the following fields:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Top 5 Queries Based on Total Time: The query statement.</li><li>Count: The number of times that the query has been executed.</li><li>Total Time: The cumulative execution time for all the executions of the query.</li></ul><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/top_5_queries.jpg" style="max-width: 100%;"></td></tr><p>The source of the query data is shown in a dropdown list next to the section title. You can select another source by choosing it from the list.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/source_list.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Query Table</h1><p>The query table provides the summary information for all executed queries. 
Calculated statistics include a Count, Query Occurrence, Time total, and many others.</p><p style="margin-left: 24px; line-height: 20px;"><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/query_table.jpg" style="max-width: 100%;"></td></tr><br/><br/>It boasts many useful features:<br/><br/><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>You can hover over a query to show the full query statement and click "Copy Query" to copy it.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/copy_query.jpg" style="max-width: 100%;"></td></tr></li>      <br/><li>Click "Show / Hide Columns" and select the columns that you want to hide. Select "Restore Default" to restore the table to its default settings.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/show-hide_columns.jpg" style="max-width: 100%;"></td></tr></li>      <br/><li>Queries can be filtered and sorted. Simply enter a search string in the Search for a query box to filter the table and click the column name to sort the table.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/query_search_and_sort.jpg" style="max-width: 100%;"></td></tr></li>      <br/><li>To change the number of queries per page, click "Rows to Display" and select a value from the list.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/rows_to_display.jpg" style="max-width: 100%;"></td></tr></li>      <br/><li>To change the total number of queries in the table, click "Total no. of Queries" and select a number from the list.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181231/total_no_of_queries.jpg" style="max-width: 100%;"></td></tr></li></ul></p><p>Looking to purchase Navicat Monitor for MySQL/MariaDB? 
It's now available via <a class="default-links" href="https://www.navicat.com/en/store/navicat-monitor-plan" target="_blank">monthly and yearly subscriptions!</a></p>]]></description>
</item>
<item>
<title>Optimize Query Performance using the Navicat Query Analyzer (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/975-optimize-query-performance-using-the-navicat-query-analyzer-part-1.html</link>
<description><![CDATA[<b>Dec 24, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Overview, Latest Deadlock Query and Process List screens</h1><p>As touched upon in the last blog series on the MySQL/MariaDB logs, one of the primary complaints levied by database administrators (DBAs) about the General and Slow Query logs is that their contents are difficult to read. The solution? Monitor your logs using Navicat Monitor for MySQL/MariaDB! Its Query Analyzer tool provides a graphical representation for the query logs that enables you to monitor and optimize query performance, visualize query activity statistics, analyze SQL statements, as well as quickly identify and resolve long running queries. Today's blog will provide an overview of this useful feature as well as describe how to take full advantage of the Latest Deadlock Query and Process List screens. Part 2 will be devoted to the Query Analyzer screen section.</p><h1 class="blog-sub-title">The Query Analyzer at a Glance</h1><p>To start using Query Analyzer, select the instance that you want to analyze in the left pane:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/instances.jpg" style="max-width: 100%;"></td></tr><p>You can also narrow down the list to the instance you're looking for by entering the name in the Search field:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/search_for_instance.jpg" style="max-width: 100%;"></td></tr><p>Upon selecting an instance, analysis begins immediately. 
After a short time, results of the analysis are displayed:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/query_analyzer.jpg" style="max-width: 100%;"></td></tr><p>The screen is divided into the following sections:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>Latest Deadlock Query: Shows the transaction information of the latest deadlock detected in the selected instance.</li><li>Process List: Displays the total number of running processes for the selected instance, and lists the last 5 processes including ID, command type, user, database and time information.</li><li>Query Analyzer: Displays information about query statements with customizable and sortable columns.</li></ul><p>The remainder of the blog will cover the first two sections above in more detail.</p><h1 class="blog-sub-title">Latest Deadlock Query</h1><p>If you'd like to see more than the latest deadlock, you can click the View All button. Doing so opens the Deadlock page. It displays all deadlocks detected on the selected instance:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/deadlock_screen.jpg" style="max-width: 100%;"></td></tr><p>All monitored instances are shown in the left pane. Selecting an instance brings up deadlocks for that instance. You can filter the list by providing a value in the "Search for a deadlock" text box.</p><p>By default, the deadlock list refreshes every 5 seconds automatically. You can change the auto-refresh time using the Refresh Time drop-down menu. 
To pause the auto refresh, click the Pause button:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/refresh_time_and_rows_ro_display.jpg" style="max-width: 100%;"></td></tr><p>You can also set the number of rows to display via the Rows to Display drop-down menu.</p><h1 class="blog-sub-title">Process List</h1><p>You can click View All to view all processes.</p><p>The Process List page displays all processes currently running on the selected instance. You can check which queries are currently being executed. The process list provides the following detailed information:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>ID: The thread ID.</li><li>User@Host: The user who issued the statement.</li><li>DB: The database that the user is currently using.</li><li>Command: The type of command that the user issued.</li><li>Time: The time in seconds that the thread has been in its current state.</li><li>State: The state that indicates what the thread is doing.</li><li>Info: The statement that the user issued.</li></ul><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/process_list.jpg" style="max-width: 100%;"></td></tr><p>As with Deadlocks, all monitored instances are shown in the left pane, where you can select an instance to show its process list. Also like deadlocks, the process list refreshes every 5 seconds automatically. It also includes a Refresh Time drop-down menu to change the auto-refresh time. Clicking the Pause button pauses auto refreshing.</p><p>The list of threads can be filtered and sorted. Simply enter a search string in the Search for a thread box to filter the list and click the column name to sort the list.
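<p>The filtering and sorting just described boils down to a simple transformation. Here's a minimal Python sketch of the idea; the column names and sample rows are illustrative, not Navicat's internals:</p>

```python
# Minimal sketch of filtering and sorting a process list, as the
# Process List page does. The dict keys mirror the columns described
# above; the sample data is made up for illustration.

def filter_and_sort(threads, search="", sort_key="Time", reverse=True):
    """Keep threads whose fields contain `search`, then sort by a column."""
    visible = [
        t for t in threads
        if search.lower() in " ".join(str(v) for v in t.values()).lower()
    ]
    return sorted(visible, key=lambda t: t[sort_key], reverse=reverse)

processes = [
    {"ID": 12, "User": "app@localhost", "Command": "Query", "Time": 4},
    {"ID": 15, "User": "dba@localhost", "Command": "Sleep", "Time": 120},
    {"ID": 19, "User": "app@localhost", "Command": "Query", "Time": 30},
]

# Longest-running "Query" threads first:
longest = filter_and_sort(processes, search="query", sort_key="Time")
print([t["ID"] for t in longest])  # → [19, 12]
```

<p>Searching is case-insensitive across every column, just as a UI search box would behave.</p>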
Moreover, clicking on Rows to Display and selecting a predefined number changes the number of threads shown per page.</p><p><b>Terminating a Process</b></p><p>In addition to showing you currently running processes, the Process List lets you stop a thread instantly by clicking in the Action column, and then clicking "End Process" in the pop-up dialog:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181224/end_process.jpg" style="max-width: 100%;"></td></tr><p>Thinking about purchasing Navicat Monitor for MySQL/MariaDB? It's now available via <a class="default-links" href="https://www.navicat.com/en/store/navicat-monitor-plan" target="_blank">monthly and yearly subscriptions!</a></p>]]></description>
</item>
<item>
<title>Receive Notifications for MySQL/MariaDB Issues</title>
<link>https://www.navicat.com/company/aboutus/blog/974-receive-notifications-for-mysql-mariadb-issues.html</link>
<description><![CDATA[<b>Dec 18, 2018</b> by Robert Gravelle<br/><br/><p>One of the main roles of database monitoring is to catch potential issues before they develop into real problems. To that end, <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">Navicat Monitor</a> for MySQL/MariaDB offers advanced root cause analysis that enables you to find in-depth information when an issue occurs. This functionality is part of the Alerts feature.</p><p>The Alert Details screen provides an overview of the selected alert that comprises its summary, timeline, metric charts, and more. Navicat also maintains an Alerts History in which you can browse the alert table, open a particular alert, assign it to a user, or select multiple alerts at a time.</p><p>But perhaps Navicat Alerts' most useful feature is being able to notify you via email, SMS, SNMP or Slack whenever a warning or critical condition occurs in your infrastructure. In today's tip, we'll learn how to set up a custom alert.</p><h1 class="blog-sub-title">Setting Alert Policies</h1><p>In Navicat Monitor for MySQL/MariaDB, you can set custom alert thresholds to monitor your infrastructure and receive alerts when the threshold rules that you defined are reached. For example: when CPU utilization exceeds 90% for more than 30 minutes. You can also customize thresholds to trigger alerts for specific instances and groups, and choose to whom alert notifications are sent.</p><p>An alert is triggered when a monitored metric value crosses a specified threshold for a certain duration. You can enable or disable alerts and change their thresholds as well as inherit settings. To configure the alert policy, go to Configurations &gt; Alert Policy.</p><p>The Alert Type table displays all available alerts and their details.
There are three types of alerts: System, Security and Performance.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181218/alert_policy_screen.png" style="max-width: 100%;"></td></tr><p>If we wanted to enable alerts for CPU Usage, we would click the "CPU Usage" label in the table row. (You can also configure multiple alerts simultaneously by checking the box beside each Alert and then clicking the "Configure Alerts" button.)</p><p>You can see that Navicat provides default values for each Alert. For instance, the CPU Usage Alert defines a warning condition at 70% of capacity, and a critical condition at 90%. Moreover, an alert is triggered only when a warning or critical condition lasts for a minimum of 5 minutes. When triggered, an email notification is sent to All Users of the database.</p><p>All of these parameters may be changed on the CPU Usage details screen:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181218/configure_cpu_usage.jpg" style="max-width: 100%;"></td></tr><p>Let's say that we wish to raise our threshold values and send emails to the DBAs after a warning or critical condition has existed for ten minutes. We could modify the alert details as follows:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181218/cpu_usage_details_screen.jpg" style="max-width: 100%;"></td></tr><p>We can even send notifications to additional parties by including a comma-delimited list of emails in the Alternative email addresses field.</p><p>Finally, click the Save button to update the Alert settings.</p><h1 class="blog-sub-title">Setting Up Notifications</h1><p>Navicat Monitor provides three options for sending notifications whenever an alert is raised in your monitored database instances or a system problem occurs while you are using them. The three options are: emails, SNMP traps and SMS messages.
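<p>The threshold rule described above (warning at 70%, critical at 90%, sustained for a minimum duration) is easy to sketch in a few lines of Python. This illustrates the logic only, not Navicat's implementation; the sample readings and one-minute sampling interval are assumptions:</p>

```python
# Sketch of the threshold rule described above: an alert fires only
# when a metric stays at or above a level for a minimum duration.
# Thresholds and samples are illustrative; this is not Navicat's code.

def alert_level(samples, warn=70.0, crit=90.0, min_minutes=5, interval=1):
    """samples: newest-last readings taken `interval` minutes apart."""
    needed = min_minutes // interval          # consecutive samples required
    recent = samples[-needed:]
    if len(recent) == needed and all(v >= crit for v in recent):
        return "critical"
    if len(recent) == needed and all(v >= warn for v in recent):
        return "warning"
    return None

cpu = [55, 62, 71, 74, 78, 81, 76, 72]        # one reading per minute
print(alert_level(cpu))                        # → warning: ≥70% for 5 minutes
```

<p>A brief spike above the threshold does not fire an alert; the condition must hold for the whole window, which is exactly why the "lasts for a minimum of 5 minutes" setting cuts down on noise.</p>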
To configure the alert notifications, go to Configurations &gt; Notifications.</p><p>Here are some example values to send email using a specific email address:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181218/email_notification_details.jpg" style="max-width: 100%;"></td></tr><p>You're now all set up to receive email notifications!</p>]]></description>
</item>
<item>
<title>Working with MySQL Logs (Part 3): the Slow Log</title>
<link>https://www.navicat.com/company/aboutus/blog/972-working-with-mysql-logs-part-3-the-slow-log.html</link>
<description><![CDATA[<b>Dec 11, 2018</b> by Robert Gravelle<br/><br/><p>Welcome back to this blog series on MySQL logging. We'll be referencing these first three installments later on when we talk about monitoring in Navicat Monitor for MySQL/MariaDB. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/970-working-with-mysql-logs.html" target="_blank">Part I</a> provided an overview of the different log types on MySQL, highlighted the most important of these, and covered the first two in the list. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/971-working-with-mysql-logs-part-2-the-binary-log.html" target="_blank">Part II</a> presented the binary log in more detail. The Slow Log will be the topic of today's blog.</p><p>The slow query log contains SQL statements that take more than a certain amount of time to execute and that examine at least a given number of rows. It's an important one because it greatly simplifies the task of finding inefficient or time-consuming queries, which, as I'm sure you well know, can adversely affect database and overall server performance.</p><h1 class="blog-sub-title">Slow Query Log Parameters</h1><p>You might be wondering what exactly constitutes a "slow" and/or "inefficient" query. Obviously, there is no universal one-size-fits-all answer, but the makers of MySQL - Oracle - place it at 10 seconds, which happens to be the default value of the long_query_time threshold variable. The minimum value of 0 causes all queries to be logged. The value can also be specified to a resolution of microseconds if you want to get very specific.</p><p>By default, administrative statements as well as queries that do not use indexes for lookups are not logged.
Having said that, this behavior can be changed using the log_slow_admin_statements and log_queries_not_using_indexes variables.</p><p>If you don't specify a name for the slow query log file, it will be named <i>host_name-slow.log</i>. The server creates the file in the data directory unless an absolute path name is given to specify a different directory. You can use the slow_query_log_file variable to specify the name of the log file.</p><h1 class="blog-sub-title">Slow Query Log Format</h1><p>Here's what a typical slow query entry might look like:</p><font face="monospace">root@server# tail /var/log/slowqueries<br/># Time: 130320  7:30:26<br/># User@Host: db_user[db_database] @ localhost []<br/># Query_time: 4.545309  Lock_time: 0.000069 Rows_sent: 219  Rows_examined: 254<br/>SET timestamp=1363779026;<br/>SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';<br/></font><p>The server will write less information to the slow query log if you use the --log-short-format option. Conversely, enabling the log_slow_extra system variable (available as of MySQL 8.0.14) will cause the server to write several extra fields to the log.</p><h1 class="blog-sub-title">Enabling Slow Query Logging</h1><p>The slow query log is disabled by default, so you have to turn it on by setting the --slow_query_log variable to 1 (ON in Navicat). Providing no argument also turns on the slow query log. An argument of 0 (OFF in Navicat) disables the log.</p><p>In Navicat, you can look up system variables using the Server Monitor tool.
It's accessible via the Tools main menu command.</p><p>In the Server Monitor, click on the middle "Variables" tab and scroll down to see the slow_query_log and slow_query_log_file server variables in the list:</p><p style="margin-left: 24px;"><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181211/slow_query_log_vars_in_navicat.jpg" style="max-width: 100%;"></td></tr><br/>The slow_query_log server variables in the Navicat Server Monitor tool</p>]]></description>
</item>
<item>
<title>Working with MySQL Logs (Part 2): the Binary Log</title>
<link>https://www.navicat.com/company/aboutus/blog/971-working-with-mysql-logs-part-2-the-binary-log.html</link>
<description><![CDATA[<b>Dec 4, 2018</b> by Robert Gravelle<br/><br/><p>Logging is about recording what happened in your databases. Just as some people might keep a personal journal to write down what happens in their daily lives, a database log keeps track of things like logins and transactions. More importantly, an effective log should include entries about access control and input validation failures. Is it any wonder then that the only MySQL log that is enabled by default is the error log (at least on Windows)?<p><p><a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/970-working-with-mysql-logs.html" target="_blank"> Last week's blog</a> provided an overview of the different log types on MySQL, highlighted the most important of these - namely, the error, general, binary, and slow logs - and covered the first two of these. Today we'll be taking a look at the binary log in more detail. That will leave the slow log for Part 3.</p><h1 class="blog-sub-title">Statements Recorded by the Binary Log</h1><p>The binary log stores events that describe database changes, for example, table creation operations or changes to table data via statements such as INSERT and UPDATE. Events for statements that potentially could have made changes, such as a DELETE which matched no rows, are also saved for posterity, except where row-based logging is used (see below for more on this). Hence, the binary log does not include statements such as SELECT or SHOW that do not modify data. These would be found in the general query log.</p><p>The binary log serves two important purposes:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. 
In fact, the master server sends the events contained in its binary log to its slaves, so that they execute those same commands in order to effect the same data changes as on the master.</li><li>Certain data recovery operations make use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup are re-executed in order to synchronize databases past the point at which the backup took place.</li></ul><p>Despite these very significant uses, binary logging is disabled by default as it can degrade performance slightly. However, the benefits offered by the binary log in setting up replication and for restoring from a backup generally tend to outweigh this minor performance hit.</p><h1 class="blog-sub-title">Binary Logging Formats</h1><p>MySQL offers three logging formats for binary logging, each with its own pros and cons. Unlike other logs, you can't enable it using a simple ON/OFF switch. Instead, you have to select the binary logging format explicitly by starting the MySQL server with "--binlog-format=type". The startup options for each format are described below:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li><b>Statement-Based</b><br/><p>Statement-based logging logs all SQL statements that make changes to the data or structure of a table. Enable with --binlog-format=STATEMENT.</p><p>Certain non-deterministic statements may not be safe for replication. If MySQL determines this to be the case, it will issue the warning "Statement may not be safe to log in statement format".</p></li><li><b>Row-Based</b><br/><p>In row-based logging, the master writes events to the binary log that indicate how individual table rows are affected. For that reason, it is important that tables always include a primary key to ensure rows can be efficiently identified.
You can tell the server to use row-based logging by starting it with --binlog-format=ROW.</p></li><li><b>Mixed</b><br/><p>A third option is mixed logging. With this logging format, statement-based logging is used by default, but the logging mode switches automatically to row-based in certain cases. To use mixed logging, start MySQL with the option --binlog-format=MIXED.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181204/binlog_format.jpg" style="max-width: 100%;"></td></tr><br/>Fig.1 - the binlog_format server variable in the Navicat Server Monitor tool</li></ul>]]></description>
</item>
<item>
<title>Working with MySQL Logs</title>
<link>https://www.navicat.com/company/aboutus/blog/970-working-with-mysql-logs.html</link>
<description><![CDATA[<b>Nov 27, 2018</b> by Robert Gravelle<br/><br/><p>In software applications, log files keep a record of what actions were performed in the system and perhaps who performed them. Should something unexpected occur, whether it be a security breach, system crash, or just sluggish performance, the log file(s) can be an administrator's best friend. As it happens, MySQL has several different log files that can help you find out what's going on inside the MySQL server. Today's blog is a primer on MySQL logging - a topic that we'll be referencing later on when we talk about monitoring in Navicat Monitor for MySQL/MariaDB.</p><h1 class="blog-sub-title">Log Types</h1><p>MySQL can support several log types, but bear in mind that, by default, no logs are enabled except for the error log on Windows. Here's a list of types:</p><head><style>table, th, td {    border: 1px solid black;    border-collapse: collapse;}th, td {    padding: 5px;    text-align: left;}</style></head><body><table border="1"><tr><th>Log file</th><th>Description</th></tr><tr><td>The error log</td><td>Problems encountered when starting, running, or stopping <b>mysqld</b>.</td></tr><tr><td>The isam log</td><td>Logs all changes to the ISAM tables. Used only for debugging the ISAM code.</td></tr><tr><td>The general query log</td><td>Established connections and executed queries.</td></tr><tr><td>The update log</td><td>Deprecated: stores all statements that change data.</td></tr><tr><td>The binary log</td><td>Stores all statements that change something. 
Used also for replication.</td></tr><tr><td>The slow log</td><td>Stores all queries that took more than <b>long_query_time</b> to execute or didn't use indexes.</td></tr></table></body><p>Out of these, the most important are the error, general, binary, and slow logs, so we'll focus on the first two today, and the last two next week.</p><h1 class="blog-sub-title">The error log</h1><p>Your first resource when troubleshooting server issues is the error log. MySQL server uses the error log to record information relevant to any issue which prevents the server from starting. You'll find the error log in the data directory specified in your my.ini file. The default data directory location in Windows is "C:\Program Files\MySQL\MySQL Server 5.7\data", or "C:\ProgramData\Mysql". Note that the "C:\ProgramData" directory is hidden by default, so you may need to change your folder options to see the directory and its contents.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181127/error_log.jpg" style="max-width: 100%;"></td></tr><br/>Fig.1 - the MySQL Error log in Windows<br/><p>For other platforms, it may be helpful to refer to the log_error config variable. If you use Navicat to manage your database(s), you can look up system variables using the Server Monitor tool. It's accessible via the Tools main menu command.</p><p>In the Server Monitor, click on the middle "Variables" tab and scroll down to log_error in the list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181127/error_log_variable.jpg" style="max-width: 100%;"></td></tr><br/>Fig.2 - the log_error server variable in the Navicat Server Monitor tool<br/><h1 class="blog-sub-title">The General Query Log</h1><p>As the name implies, the general query log provides a general record of what MySQL is doing. The server writes information to this log when clients connect or disconnect, as well as each SQL statement received from clients. 
The general query log can be very useful when you suspect an error in a client application and want to know exactly what the client sent to the database.</p><p>By default, the general query log is disabled. To enable it, set the general_log variable to 1 (or ON in Navicat). Not assigning any value to general_log also enables it. Setting it back to 0 (or OFF in Navicat) disables the log. To specify a log file name, assign it to the general_log_file variable. To control where the log is written, use the log_output system variable. MySQL can also send output to the general_log table in the mysql system database. In fact, file output, table output, or both can be selected. We'll talk about that in greater detail in the next blog.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181127/general_log_variables.jpg" style="max-width: 100%;"></td></tr><br/>Fig.3 - the general_log and general_log_file server variables in the Navicat Server Monitor tool<br/>]]></description>
</item>
<item>
<title>Configure an Instance in Navicat Monitor for MySQL/MariaDB</title>
<link>https://www.navicat.com/company/aboutus/blog/967-configure-an-instance-in-navicat-monitor-for-mysql-mariadb.html</link>
<description><![CDATA[<b>Nov 20, 2018</b> by Robert Gravelle<br/><br/><p>Navicat Monitor for MySQL/MariaDB is an agentless remote server monitoring tool that is packed with features to make monitoring your database (DB) instances as effective and easy as possible. Moreover, its server-based architecture makes it accessible from anywhere via a web browser, thus providing you unhampered access to easily and seamlessly track your servers from anywhere in the world, at any time of day or night.</p><p>Once you have finished installing Navicat Monitor and have logged in, you're ready to create the instances you want to monitor. In today's blog, we'll learn how to configure a DB instance for monitoring.</p><h1 class="blog-sub-title">Creating a New Instance</h1><p>You can create new instances on the following pages by clicking "New Instance" and selecting the server type. You'll find the New Instance button on both the Overview and Configuration screens. Did you know that you can create up to 1000 instances?!</p><p>The Overview dashboard page shows all instances that are monitored by Navicat Monitor. It provides high-level summary information and the health status of your instances, and identifies instances which require critical attention.</p><p>Instance information is presented on instance cards that let you identify the server status and system resource usage. To create a new instance to monitor your server, click on "New Instance" and select the server type from the dropdown list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181120/overview.jpg" style="max-width: 100%;"></td></tr><p>Then, enter the appropriate information in the New Instance window.</p><p>In the New Instance window, enter a descriptive name in the Instance Name field. I like to give the instance the same name that I gave it in <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.
Select the Group for your instance. If you want to add a new group, you can do so by clicking "New Group". Then, provide the following information to connect to your server:</p><p>Navicat Monitor can connect to the database server over a secure SSH tunnel to send and receive monitoring data. It allows you to connect to your servers even if remote connections are disabled or are blocked by firewalls.</p><p>In the MySQL Server or MariaDB Server section, enter the following information:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Host Name: The host name or IP address of the database server.</li><li>Port: The TCP/IP port for connecting to the database server.</li><li>Username: A monitoring user for connecting to the database server.<br/>I would recommend creating a separate monitoring account that does not cause load on the monitored instance. You should grant REPLICATION CLIENT, SUPER, PROCESS and SELECT privileges on all database objects to the monitoring user.</li><li>Password: The login password of the monitoring user.</li><li>Server Type: The type of the server. Can be Unix-like or Windows.</li></ul><p>Navicat Monitor can also collect the DB server's system performance metrics such as CPU and memory resources. If you do not provide this login, you can still monitor your server, but system performance metrics will not be shown.</p><p>Finally, click "New" to create the new instance.</p><p>Here's the New Instance dialog with all of the fields filled in:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181120/new_instance_dialog.jpg" style="max-width: 100%;"></td></tr><p>Here's the "New Instance" button on the Instance Configuration screen.
It's accessible via Configurations > All Instances:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181120/configuration.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Token Activation</h1><p>Once your trial period is finished, Navicat Monitor requires tokens to continue monitoring that instance. Tokens may be purchased from the Navicat website. To manage your tokens and license your instances, go to Configurations &gt; Activation.</p><p>To activate the instance, locate it in the Unlicensed Instances list, check the box beside it, and click the License button to move it into the Licensed Instances list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181120/activation.jpg" style="max-width: 100%;"></td></tr><p>You're now ready to start monitoring your instance!</p>]]></description>
</item>
<item>
<title>Storing Large Files in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/964-storing-large-files-in-mongodb.html</link>
<description><![CDATA[<b>Nov 13, 2018</b> by Robert Gravelle<br/><br/><p>MongoDB employs a serialization format called "BSON" to store documents. A combination of the words "Binary" and "JSON" (JavaScript Object Notation), you can think of BSON as a binary representation of JSON documents. Unfortunately, the BSON serialization format has a size limitation of 16 MB. While that leaves plenty of headroom for most data types, for larger binary files, MongoDB employs a separate specification called GridFS for storing and retrieving files.</p><p>In today's blog, we'll be taking a look at how Navicat for MongoDB implements the GridFS spec to store large files.</p><h1 class="blog-sub-title">About GridFS</h1><p>To get around the 16 MB limit, GridFS divides the file into parts, or chunks, and stores each chunk as a separate document. By default, GridFS uses a chunk size of only 255 KB. The file is divided into chunks of 255 KB, with the exception of the last chunk, which is whatever bytes are left. Files that are smaller than the chunk size have only a final chunk, using only as much space as needed plus a bit of additional metadata.</p><p>Behind the scenes, GridFS actually uses two collections to store files: one collection to store the file chunks, and the other to store file metadata.</p><p>GridFS is useful not only for storing files that exceed 16 MB but also for storing any file that you want to access without having to load the entire file into memory.</p><h1 class="blog-sub-title">Storing a Large File in Navicat for MongoDB</h1><p>Navicat supports GridFS buckets and provides a tool for this very purpose. Clicking the large GridFS button on the main toolbar displays a new tab, which includes several commands for working with your files.
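<p>The chunking arithmetic described above is easy to verify for yourself. A quick Python sketch, assuming the 255 KB default means 255 × 1024 bytes:</p>

```python
import math

# Work out how GridFS would split a file into chunks, per the
# default 255 KB chunk size described above.

CHUNK_SIZE = 255 * 1024   # GridFS default chunk size, in bytes

def chunk_layout(file_length, chunk_size=CHUNK_SIZE):
    """Return (number_of_chunks, size_of_last_chunk) for a file."""
    if file_length == 0:
        return (0, 0)
    n = math.ceil(file_length / chunk_size)
    last = file_length - (n - 1) * chunk_size   # whatever bytes are left
    return (n, last)

# A 1 MB file: four full 255 KB chunks plus a small final chunk.
print(chunk_layout(1024 * 1024))   # → (5, 4096)
```

<p>Note how a file smaller than the chunk size yields a single chunk of exactly the file's length, matching the "only as much space as needed" behavior above.</p>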
If you haven't previously added any files, only the New Bucket button is enabled:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/gridfs_toolbar2.jpg" style="max-width: 100%;"></td></tr><p>Suppose that you have a large video file that you'd like to include in your movie database. You'll need a bucket in which to add your file, so click the New Bucket button on the toolbar and enter a name for your bucket in the GridFS Bucket Name dialog.</p><p>You can add your file by clicking on Upload File:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/upload_btn.jpg" style="max-width: 100%;"></td></tr><p>It will bring up the File Browse dialog so that you can navigate to your file. Clicking the Upload button in the File Browse dialog commences the upload. The progress will be shown in a progress bar at the bottom of the screen:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/upload_progress_bar.jpg" style="max-width: 100%;"></td></tr><p>Once completed, you'll be able to view the file details:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/file_details.jpg" style="max-width: 100%;"></td></tr><p>The file details correspond to the GridFS files collection fields:</p><font face="monospace">{<br/>&nbsp;&nbsp;"_id" : &lt;ObjectId&gt;,<br/>&nbsp;&nbsp;"length" : &lt;num&gt;,<br/>&nbsp;&nbsp;"chunkSize" : &lt;num&gt;,<br/>&nbsp;&nbsp;"uploadDate" : &lt;timestamp&gt;,<br/>&nbsp;&nbsp;"md5" : &lt;hash&gt;,<br/>&nbsp;&nbsp;"filename" : &lt;string&gt;,<br/>&nbsp;&nbsp;"contentType" : &lt;string&gt;,<br/>&nbsp;&nbsp;"aliases" : &lt;string array&gt;,<br/>&nbsp;&nbsp;"metadata" : &lt;any&gt;<br/>}</font><p>Navicat can work with much smaller files as well.
It's particularly adept at working with images; it even has a preview feature so that you can view images without having to open them in another tool:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/file_preview.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Retrieving Files</h1><p>Once a file has been uploaded into your database, you don't have to keep a copy on the file system. Anytime you need it, just click the Download button and select the folder in which to save the file:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181113/download.jpg" style="max-width: 100%;"></td></tr><p>And that concludes our exploration of GridFS Buckets in Navicat for MongoDB!</p>]]></description>
</item>
<item>
<title>New Features in Navicat Monitor 1.8</title>
<link>https://www.navicat.com/company/aboutus/blog/963-new-features-in-navicat-monitor-1-7.html</link>
<description><![CDATA[<b>Nov 6, 2018</b> by Robert Gravelle<br/><br/><p>The Navicat team is proud to announce the launch of Navicat Monitor 1.8. This minor update adds a couple of exciting features:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>The dashboard adds a compact view.</li><li>Support for the <a class="default-links" href="https://slack.com/" target="_blank">Slack</a> collaboration hub, so now you can get notifications via Slack whenever a warning or critical condition occurs in your infrastructure.</li></ul><p>Today's blog examines both features and describes how to download the new version.</p><h1 class="blog-sub-title">Dashboard Compact View</h1><p>The dashboard was designed to allow the monitoring of several database instances at a glance. It presents an easy-to-read, one-page summary of the real-time analytics related to the health and performance of all your instances. Moreover, you can customize the dashboard for your preferred server metrics to get a global view of each instance, as well as apply instance grouping to allow smooth navigation between each of them.</p><tr><td align="middle"><img src="https://www.navicat.com/images/06.06_DiscoverNavicatMonitor_02_Dashboards_Comfort.png" style="max-width: 100%;"></td></tr><p>The new Compact View presents a more streamlined set of data cards about each instance, allowing you to include hundreds of database instances within a single screen!</p><tr><td align="middle"><img src="https://www.navicat.com/images/06.06_DiscoverNavicatMonitor_02_Dashboards_Compact.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Slack Alerts</h1><p>Slack is a collaboration hub that connects members of an organization to help get work done within a team environment. The Slack platform includes notifications for things that need your attention.
Notifications are received whether you're using Slack on your desktop or on your mobile device.</p><p>Using Navicat Monitor, DBAs may now receive alerts via Slack, in addition to email, SMS, and SNMP.</p><tr><td align="middle"><img src="https://www.navicat.com/images/02.Product_01_NavicatMonitor_03b_Notifications.png" style="max-width: 100%"></td></tr><p>You'll find the Slack option grouped with the other notification settings on the Alert Policy Details screen. Here's how to get there:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>The Configurations &gt; Alert Policy screen contains a list of metrics to alert users about. Alert types are grouped by System Alerts, Security Alerts, and Performance Alerts:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181106/alert_policy.jpg" style="max-width: 100%;"></td></tr></li><br/><li>Clicking an alert brings up the Alert Policy for that metric:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181106/notification_settings.jpg" style="max-width: 100%;"></td></tr></li><br/><li>The Notification Settings are located at the bottom of the page. These include when to send notifications, how to send them, and to whom.</li></ul><h1 class="blog-sub-title">Downloading Navicat Monitor 1.8</h1><p>The new version of <a class="default-links" href="https://www.navicat.com/en/download/navicat-monitor" target="_blank">Navicat Monitor</a> is available for download on the Navicat site. Installation options run the gamut from Windows, macOS, macOS Homebrew, Linux, Linux repositories, and Docker to FreeBSD. You also have the option of selecting either an online or offline installer.</p><p>You can learn more about Navicat Monitor on the <a class="default-links" href="https://www.navicat.com/en/products/navicat-monitor" target="_blank">product page</a>. 
The <a class="default-links" href="https://www.navicat.com/en/discover-navicat-monitor" target="_blank">Discover Navicat Monitor</a> page has even more details.</p>]]></description>
</item>
<item>
<title>Editing User Roles in Navicat for MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/938-editing-user-roles-in-navicat-for-mongodb.html</link>
<description><![CDATA[<b>Oct 30, 2018</b> by Robert Gravelle<br/><br/><p>Navicat for MongoDB includes GUI Designers for both Users and Roles. We were introduced to the User Designer in the last blog. Today, we'll learn how to edit user roles using Navicat's Role Designer.</p><h1 class="blog-sub-title">Accessing User Roles</h1><p>Both user and role information, including privileges, is stored on the server, because Navicat employs MongoDB's native commands behind the scenes. The User and Role commands are located in the main window toolbar. Clicking either button opens the user/role object list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181030/navicat_user_command.jpg" style="max-width: 100%"></td></tr><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/914-introduction-to-user-role-management-in-mongodb.html" target="_blank">last blog</a>, we chose the User item. This time, we'll select Role. That brings up the Roles Toolbar in the Objects tab, along with a list of roles for that database. For instance, here are the user roles that we created in the last blog:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181030/roles_toolbar.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Working with the Role Tab</h1><p>Highlighting a role enables the Edit Role and Delete Role buttons. Clicking on Edit Role then opens the role in a new tab. It contains a number of tabs; in fact, both the create and edit role actions share the same tabs. 
The difference is that, in the case of edits, the Role Name is pre-populated in the General tab and read-only:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181030/edit_role_tabs.jpg" style="max-width: 100%"></td></tr><br/><p>Here's a quick rundown of the Role tabs:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li><b>General Properties:</b> Role Name: Defines a name for the role.</li><li><b>Built-In Roles:</b> Use this list to assign the role to be a member of the selected built-in role.</li><li><b>User-Defined Roles:</b> Use this list to assign the role to be a member of the selected user-defined role.</li><li><b>Members (Roles):</b> Use this list to assign the selected role to be a member of this role.</li><li><b>Members (Users):</b> Use this list to assign the selected user to be a member of this role.</li><li><b>Authentication Restrictions:</b> To edit specific authentication restrictions that the server enforces on the role, click Add Restriction.</li><li><b>Client Source:</b> Specifies a list of IP addresses or CIDR ranges to restrict the client's IP address.</li><li><b>Server Address:</b> Specifies a list of IP addresses or CIDR ranges to which the client can connect.</li></ul><br/><p style="font-size: 18px;"><b>About Authentication Restrictions</b></p><p>New to MongoDB version 3.6, an authentication restriction specifies a list of IP addresses and Classless Inter-Domain Routing (CIDR) ranges from which the user is allowed to connect to the server or from which the server can accept users.</p><p>The authenticationRestrictions document can contain only the following two fields. The server throws an error if the authenticationRestrictions document contains an unrecognized field:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px"><li><b>clientSource</b> is an array of IP addresses and/or CIDR ranges. 
When one or more values are present, the server will verify that the client's IP address is either in the given list or belongs to a CIDR range in the list. If the client's IP address is not found, the server will not authenticate the user.<br/><br/>In Navicat, clientSource values may be added directly to the Client Source field, as a comma-separated list, or via the Client Source dialog. The dialog is opened by clicking on the ellipsis [...] button at the right of the field:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181030/client_source_dialog.jpg" style="max-width: 100%"></td></tr></li><li><b>serverAddress</b> is an array of IP addresses and/or CIDR ranges to which the client can connect. If one or more values are present, the server will verify that the client's connection was accepted via an IP address in the given list. If the connection was accepted via an unrecognized IP address, the server does not authenticate the user.</li></ul><p>In Navicat, serverAddress values may be added directly to the Server Address field, as a comma-separated list, or via the Server Address dialog. The dialog is opened by clicking on the ellipsis [...] button at the right of the field:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181030/server_address_dialog.jpg" style="max-width: 100%"></td></tr><p>Don't forget to click the Save button to save the updated role!</p>]]></description>
</item>
<item>
<title>Introduction to User &amp; Role Management in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/914-introduction-to-user-role-management-in-mongodb.html</link>
<description><![CDATA[<b>Oct 23, 2018</b> by Robert Gravelle<br/><br/><p>MongoDB provides a User Management Interface for performing a wide variety of user-related tasks. In addition to adding new users, the User Management Interface also allows database administrators (DBAs) to update existing users, such as changing passwords and granting or revoking roles. In today's blog, we'll explore how to create a new user using Navicat for MongoDB's User &amp; Role Management facilities.</p><h1 class="blog-sub-title">How MongoDB Stores User Data</h1><p>It's important to know what happens when you create a new user in MongoDB. The user's data is inserted into a specific database called the authentication database. Moreover, MongoDB stores all user information, including name, password, and the user's authentication database, in the system.users collection in the admin database.</p><p>The user's name and authentication database together serve as a unique identifier for that user. Therefore, if two users have the same name but are created in different databases, they are considered to be two separate users for all intents and purposes. Hence, if you intend to have a single user with permissions on multiple databases, you should create a single user with roles in the applicable databases instead of creating the user multiple times in different databases.</p><p>Privileges are not limited to the user's authentication database; they can extend across different databases. By assigning the user roles in other databases, a user created in one database can be granted permissions to act on those databases.</p><h1 class="blog-sub-title">Creating a New User</h1><p>DBAs should not access the system.users collection directly, but instead use MongoDB's user management commands. 
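</p><p>Before moving on to the commands, the identity rule above can be sketched in plain JavaScript. This is purely an illustrative model (the makeUserId helper is our own invention, not a MongoDB API), but it captures why one user with roles in several databases beats several same-named users:</p>

```javascript
// Illustrative model: MongoDB identifies a user by the pair
// (name, authentication database), often written as name@authDb.
const makeUserId = (user, authDb) => `${user}@${authDb}`;

// Two "tsmith" accounts created in different databases are distinct users...
const idA = makeUserId("tsmith", "employees");
const idB = makeUserId("tsmith", "accounts");
console.log(idA === idB); // false

// ...whereas the recommended setup is one user whose roles span databases:
const tsmith = {
  user: "tsmith",
  authDb: "employees", // the authentication database
  roles: [
    { role: "read", db: "employees" },
    { role: "readWrite", db: "accounts" },
  ],
};
console.log(makeUserId(tsmith.user, tsmith.authDb)); // tsmith@employees
```

<p>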
Creating a user is accomplished using the db.createUser() method or createUser command.</p><p>Here's an operation that creates a user in the employees database and assigns a name, password, and roles:</p><font face="monospace">use employees<br/>db.createUser(<br/>&nbsp;&nbsp;{<br/>&nbsp;&nbsp;&nbsp;&nbsp;user: "tsmith",<br/>&nbsp;&nbsp;&nbsp;&nbsp;pwd: "ascend99",<br/>&nbsp;&nbsp;&nbsp;&nbsp;roles: [<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{ role: "read", db: "employees" },<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{ role: "read", db: "products" },<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{ role: "read", db: "sales" },<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{ role: "readWrite", db: "accounts" }<br/>&nbsp;&nbsp;&nbsp;&nbsp;]<br/>&nbsp;&nbsp;}<br/>)</font><p>Navicat provides the powerful User Designer tool for managing server user accounts and their associated privileges. It stores all users' information and privileges on the server, because it employs MongoDB's native commands behind the scenes. You'll find the User or Role command in the main window toolbar. Click the button to open the user/role object list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181023/navicat_user_command.jpg" style="max-width: 100%;"></td></tr><p style="font-size: 18px;">The User Designer Tool at a Glance</p><p>Choosing the User item from the user/role object list opens a new Objects toolbar with User-related commands:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181023/navicat_user_toolbar.jpg" style="max-width: 100%;"></td></tr><p>To create a new user, click the New User button. 
That will open the User Designer tool:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181023/navicat_user_designer.jpg" style="max-width: 100%;"></td></tr><p>The User Designer is broken up into several tabs as follows, from left to right:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>General:<br/>The user name, password, and encryption specification.</li><li>Custom Data:<br/>In this tab, you can enter any information associated with this user.</li><li>Built-In Roles:<br/>In the list, assign this user to be a member of the selected built-in roles.</li><li>User-Defined Roles:<br/>In the list, assign this user to be a member of the selected user-defined roles.</li><li>Authentication Restrictions:<br/>Edit specific authentication restrictions that the server enforces on the user.</li><li>Script Preview:<br/>Displays the native MongoDB command(s) that will be executed.</li></ul><p>To add the above user in Navicat:</p><p>On the General tab:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Enter the User Name.</li><li>Specify a login password for the user.</li><li>Re-type the login password in the Confirm Password field.</li><li>Next, on the Built-In Roles tab, select the following roles:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181023/user_roles.jpg" style="max-width: 100%;"></td></tr></li><li>You can preview the generated command on the Script Preview tab:<br/><br/><font face="monospace">db.createUser({<br/>    &nbsp;&nbsp;&nbsp;&nbsp;user: "tsmith",<br/>    &nbsp;&nbsp;&nbsp;&nbsp;pwd: "ascend99",<br/>    &nbsp;&nbsp;&nbsp;&nbsp;roles: [<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;role: "read",<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;db: "sales"<br/>        
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;},<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;role: "readWrite",<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;db: "accounts"<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;},<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;role: "read",<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;db: "employees"<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;},<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;role: "read",<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;db: "products"<br/>        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br/>    &nbsp;&nbsp;&nbsp;&nbsp;],<br/>  &nbsp;&nbsp;&nbsp;&nbsp;authenticationRestrictions: [ ]<br/>})<br/></font></li><li>Click the Save button to create the new user.</li></ul>]]></description>
</item>
<item>
<title>Sorting Documents in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/913-sorting-documents-in-mongodb.html</link>
<description><![CDATA[<b>Oct 18, 2018</b> by Robert Gravelle<br/><br/><p>Sorting a list of English words is simple enough because English relies on straightforward alphabetical ordering. Sorting a set of German or French words, with all of their accents, or Chinese, with its different characters, is a lot harder. Sorting rules are specified through locales, which determine how accents are sorted, the order in which characters appear, and how case-insensitive sorting is performed.</p><p>In the last couple of blogs we learned how to specify collation rules for a collection or view in Navicat for MongoDB. Today we're going to see collation rules in action by sorting two collections with the same data but defined using different collation rules. </p><h1 class="blog-sub-title">The Test Data</h1><p>For our test we'll use the following three words: 'boffey', 'böhm', and 'brown'. Using the American English (en_US) locale, they will be sorted as:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>boffey</li><li>böhm</li><li>brown</li></ul><p>Meanwhile, sorting according to the nb (Norwegian) locale will reverse 'brown' and 'böhm':</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>boffey</li><li>brown</li><li>böhm</li></ul><h1 class="blog-sub-title">Creating the Collections</h1><p>In Navicat, selecting your database in the Objects pane will display the Objects toolbar with the New Collection button enabled:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/new_collection_button.jpg" style="max-width: 100%"></td></tr><p>Clicking it will bring up a new Untitled Collection tab, along with its own toolbar. Click on the Collation tab to set the collation rules. For our purposes, all you need to do is select the "en_US" item from the Locale dropdown and hit the Save button. That'll bring up a dialog where you can provide a name for the collection. 
Call this one "sort_en_us":</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/collection_name_dialog.jpg" style="max-width: 100%"></td></tr><p>Upon saving the collection, the remaining collation rules will change to their defaults:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/en_us_collation_settings.jpg" style="max-width: 100%"></td></tr><p>Now we're ready to add the documents.</p><p>Double-click our new collection in the Objects pane to bring up the data. To enter a new document, click on the button with the Plus sign in the bottom-left corner:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/add_document_button.jpg" style="max-width: 100%"></td></tr><p>That will display the Add Document dialog. There, you can provide the "name" field and "böhm" value:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/add_document_dialog.jpg" style="max-width: 100%"></td></tr><p>Clicking the Add button appends your new document to the collection.</p><p>Repeat that process to enter the "boffey" and "brown" values.</p><p>Next, create another collection named "sort_norwegian". This time, choose "nb" from the Locale dropdown. Be sure to enter the data in the same order so that both our datasets are identical.</p><h1 class="blog-sub-title">Sorting the Collections</h1><p>With our two test collections in place, we're ready to sort them.</p><p>To do that, open the sort_en_us collection and click the Sort button on the toolbar. That will open a new pane above the data where you can define the sort criteria. To add a sort field, click on the Plus sign button. The _id field will be set by default. To change it, click the field name and choose the name field from the list. Finally, apply the sort criteria by clicking the check mark button. 
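</p><p>As a quick cross-check outside of Navicat: JavaScript's Intl.Collator is backed by the same ICU locale data that MongoDB collation uses, so it should reproduce both orderings (this assumes a runtime with full ICU, as modern Node.js ships):</p>

```javascript
// Sort the test words under the en_US and nb (Norwegian) locales.
const words = ["böhm", "boffey", "brown"];

const enUS = [...words].sort(new Intl.Collator("en-US").compare);
const nb = [...words].sort(new Intl.Collator("nb").compare);

console.log(enUS); // [ 'boffey', 'böhm', 'brown' ]
console.log(nb);   // [ 'boffey', 'brown', 'böhm' ]
```

<p>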
Your data should now look as follows:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/en_us_sorting.jpg" style="max-width: 100%"></td></tr><p>Do the same for the sort_norwegian collection and notice the different results:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181018/nb_sorting.jpg" style="max-width: 100%"></td></tr><p>And that, dear readers, is collation at work!</p>]]></description>
</item>
<item>
<title>Specifying Collation in MongoDB (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/912-specifying-collation-in-mongodb-part-2.html</link>
<description><![CDATA[<b>Oct 9, 2018</b> by Robert Gravelle<br/><br/><p>In this series on Collation support in MongoDB, we've been learning how to specify collation in MongoDB using the <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a> GUI administration and development tool. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/877-specifying-collation-in-mongodb-part-1.html" target="_blank">Part I</a> provided a brief introduction to the concept of collation, covered the fields that govern collation in MongoDB, and got into the specifics of the first three fields, namely Locale, Case Level, and Case First. Today's blog will describe the rest of the fields.</p><h1 class="blog-sub-title">Strength</h1><p>Our next field, Strength, specifies the level of comparison to perform.</p><p>Possible values include:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Primary: Collation performs comparisons of the base characters only, ignoring other differences such as accents and case. Hence, á, à, and a would all be treated as the same character.</li><li>Secondary: Collation performs comparisons up to secondary differences, such as accents. That is, base characters + accents. Note that differences between base characters take precedence over secondary differences.</li><li>Tertiary: Collation performs comparisons up to tertiary differences, such as case and letter variants. That is, collation performs comparisons of base characters, accents, as well as case and variants. Although English only has case variants, some languages have different but equivalent characters, e.g., simplified vs. traditional Chinese. 
At this level, differences between base characters take precedence over accents, which in turn take precedence over case and variant differences. <b>This is the default level.</b></li><li>Quaternary: Limited to specific use cases: considering punctuation when levels 1 to 3 ignore it, or processing Japanese text.</li><li>Identical: Limited to the specific use case of serving as a tie breaker.</li></ul><p>In Navicat, you'll find all of the above values conveniently located in a dropdown list:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/strength_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Numeric Ordering</h1><p>This is a flag that determines whether to compare numeric strings as numbers or as strings:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>If <i>on</i>, compare as numbers; e.g., "10" is greater than "2".</li><li>If <i>off</i>, compare as strings; e.g., "10" is less than "2".</li></ul><p>The default is <i>off</i>.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/numeric_ordering_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Alternate</h1><p>This is another simple but powerful field that determines whether collation should consider whitespace and punctuation as base characters for purposes of comparison.</p><p>It has only two possible values:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li><i>non-ignorable</i>: Whitespace and punctuation are considered base characters.</li><li><i>shifted</i>: Whitespace and punctuation are not considered base characters and are only distinguished at strength levels greater than 3.</li></ul><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/alternate_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Max Variable</h1><p>This field determines up to which characters are 
considered ignorable when Alternate is set to <i>shifted</i>. It has no effect when Alternate is set to <i>non-ignorable</i>.</p><p>It has only two possible values:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li><i>punct</i>: Both whitespace and punctuation are "ignorable", i.e. not considered base characters.</li><li><i>space</i>: Only whitespace is "ignorable", i.e. not considered a base character.</li></ul><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/max_variable_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Backwards</h1><p>Here's another flag. This one determines whether strings with accents sort from the back of the string, such as with some French dictionary ordering.</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>If <i>on</i>, compare from back to front.</li><li>If <i>off</i>, compare from front to back.</li></ul><p>The default value is <i>off</i>.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/backwards_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Normalization</h1><p>Our final field is a flag that determines whether to check if text requires normalization and to perform normalization if it does. 
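</p><p>Before wrapping up, note that several of the fields in this series have close analogues among JavaScript's Intl.Collator options (Intl.Collator is not MongoDB, but both build on ICU, so it offers a quick way to preview behaviour such as Numeric Ordering and Strength):</p>

```javascript
// numericOrdering analogue: compare numeric strings as numbers vs. as strings.
const numericOn = new Intl.Collator("en", { numeric: true });
const numericOff = new Intl.Collator("en", { numeric: false });

const asNumbers = numericOn.compare("10", "2");  // positive: "10" > "2"
const asStrings = numericOff.compare("10", "2"); // negative: "10" < "2"

// strength analogue: sensitivity "base" compares base characters only,
// like the Primary level, so accents (and case) are ignored.
const primary = new Intl.Collator("en", { sensitivity: "base" });
const sameBaseChar = primary.compare("a", "á") === 0; // true
```

<p>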
Generally, the majority of text does not require normalization processing.</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li>If <i>on</i>, check if fully normalized and perform normalization to compare text.</li><li>If <i>off</i>, do not check.</li></ul><p>The default value is <i>off</i>.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181009/normalization_field.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Now that we've covered all of the Collation fields, in a future blog we'll learn how to apply collation rules to your sorting operations in MongoDB.</p>]]></description>
</item>
<item>
<title>Specifying Collation in MongoDB (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/877-specifying-collation-in-mongodb-part-1.html</link>
<description><![CDATA[<b>Oct 3, 2018</b> by Robert Gravelle<br/><br/><p>Collation involves a set of language-specific rules for string comparison, such as those for lettercase and accent marks. Your run-of-the-mill sorting is fine for simple entries made up of alphanumeric characters, but once you include special characters, such as @, #, $, and %, or accented characters, such as à, é, ö, and ü, it becomes imperative that you specify your own collation.</p><p>MongoDB added collation support in version 3.4, so that you can specify collation for a collection or a view, an index, or certain operations that support collation, such as find() and aggregate().</p><p>Today's blog will provide a brief introduction to the concept of collation, cover the fields that govern collation in MongoDB, and show how to specify collation using the Navicat for MongoDB GUI administration and development tool. We'll get into the specifics of the first three fields today; the rest will be described in part 2.</p><h1 class="blog-sub-title">Collation Document Fields</h1><p>To use collation options other than the default, you can specify a Collation Document. 
It's made up of the following fields:</p><font face="monospace">{<br/>&nbsp;&nbsp;&nbsp;locale: &lt;string&gt;,<br/>&nbsp;&nbsp;&nbsp;caseLevel: &lt;boolean&gt;,<br/>&nbsp;&nbsp;&nbsp;caseFirst: &lt;string&gt;,<br/>&nbsp;&nbsp;&nbsp;strength: &lt;int&gt;,<br/>&nbsp;&nbsp;&nbsp;numericOrdering: &lt;boolean&gt;,<br/>&nbsp;&nbsp;&nbsp;alternate: &lt;string&gt;,<br/>&nbsp;&nbsp;&nbsp;maxVariable: &lt;string&gt;,<br/>&nbsp;&nbsp;&nbsp;backwards: &lt;boolean&gt;,<br/>&nbsp;&nbsp;&nbsp;normalization: &lt;boolean&gt;<br/>}</font><p>You can see the same fields represented in Navicat on the Collation tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20181003/collation_fields.jpg" style="max-width: 100%"></td></tr><p>Of all the above fields, only the locale field is mandatory; all of the other collation fields are optional.</p><p>Now let's take a closer look at each field and get a better idea what values are permissible for each:</p><ul style="list-style-type: disc; margin-left: 24px; line-height: 20px;"><li><p style="font-size: 20px"><b>Locale:</b></p><p>A locale identifies a specific user community, i.e., a group of individuals who share a similar culture and language idioms. In practice, a community is the intersection of all people speaking the same language and living in the same country. For example, the French locale for France is distinct from the French locale for Canada. Therefore, "fr" is the locale code for French in France, while "fr_CA" adds the two-character country code for Canada. While the two locales will have many similarities, there will be some differences, such as currency, which is the Euro (€) in France and the Dollar ($) in Canada.</p><p>As you might imagine, there are numerous locales. The Locale dropdown contains many of the more common ones. The first item in the list, "simple", specifies a simple binary comparison. 
You can also enter your own in the textbox portion of the dropdown.</p><p style="font-size: 18px"><b>Sorting Differences Between Languages</b></p><p>With regard to sorting, every language has its own sort order, and sometimes even multiple sort orders. Here's how the same names would be sorted under different locales:</p><ul style="list-style-type: circle; margin-left: 30px; line-height: 20px"><li>English (en): bailey, boffey, böhm, brown</li><li>German (de_DE): bailey, boffey, böhm, brown</li><li>German phonebook (de-DE_phonebook): bailey, böhm, boffey, brown</li><li>Swedish (sv_SE): bailey, boffey, brown, böhm</li></ul></li><li><p style="font-size: 20px"><b>Case Level:</b></p><p>A flag that determines whether to include case comparison.</p><ul style="list-style-type: circle; margin-left: 30px; line-height: 20px"><li>If "on", include case comparison.</li><li>If "off", do not include case comparison.</li></ul></li><li><p style="font-size: 20px"><b>Case First:</b></p><p>A field that determines the sort order of case differences. Values include:</p><ul style="list-style-type: circle; margin-left: 30px; line-height: 20px"><li>"upper": Uppercase sorts before lowercase.</li><li>"lower": Lowercase sorts before uppercase.</li><li>"off": Default value. Similar to "lower", but with slight differences.</li></ul></li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's blog, we were introduced to the concept of collation, covered the fields that govern collation in MongoDB, and learned how to specify collation for MongoDB using Navicat for MongoDB. Having familiarized ourselves with the first three Collation Document fields, we'll move on to the remaining fields in part 2.</p>]]></description>
</item>
<item>
<title>Introduction to Views in MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/876-introduction-to-views-in-mongodb.html</link>
<description><![CDATA[<b>Sep 18, 2018</b> by Robert Gravelle<br/><br/><p>In relational databases, a view is a searchable data subset that is defined by a query. Views are sometimes referred to as "virtual tables" because they don't store data, but can be queried just like tables. MongoDB introduced views in version 3.4. In today's blog, we'll learn how to create a view in MongoDB using the Navicat for MongoDB GUI administration and development tool.</p><h1 class="blog-sub-title">Opening the View Object List</h1><p>There are two ways to open the view object list from the Navicat main window:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>Click the View command button from the main toolbar.</li><li>Select the Views object in the Database Objects tree.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180918/view_command_button.jpg" style="max-width: 100%"></td></tr></li></ul><h1 class="blog-sub-title">The Navicat View Designer</h1><p>The View Designer is a specialized Navicat for MongoDB tool for creating and editing your views. It's accessible via the New View button from the Objects Tab toolbar:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180918/new_view_button.jpg" style="max-width: 100%"></td></tr><p>You can also right-click (Ctrl+Click on macOS) the Views object in the Database Objects tree and select New View from the pop-up menu:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180918/new_view_menu_item.jpg" style="max-width: 100%"></td></tr><p><i>TIP: You can create a view shortcut by right-clicking (Ctrl+Click on macOS) a view in the Objects tab and selecting Create Open View Shortcut from the pop-up menu. 
This provides a convenient way to open your view directly from the Navicat main window.</i></p><p style="font-size: 18px;">Creating a View</p><p>When you create a view in MongoDB, the engine runs an aggregation pipeline over a source collection or view. Hence, creating a view requires that we specify a collection or an existing view as its source.</p><p>We'll create a view that shows only actors' full names.</p><p>On the General tab, choose the actor collection from the Collection/View dropdown list.</p><p>Now click on the Pipeline tab. It contains a dropdown list of Operators along with an Expression text field.</p><p>MongoDB features a number of Operators for constructing expressions for use in the aggregation pipeline stages that shape your view. Operator expressions are similar to functions that take arguments. In general, these expressions take an array of arguments and have the following form:</p><font face="monospace">{ &lt;operator&gt;: [ &lt;argument1&gt;, &lt;argument2&gt; ... ] }</font><p>The Operator we need to select from the list is $project. It passes along the documents with the requested fields to the next stage in the pipeline. 
The specified fields can be existing fields from the input document or newly computed fields.</p><p>Here is an Expression that suppresses the _id field and concatenates the first_name and last_name fields from the actor collection.</p><font face="monospace">{ _id: 0, full_name : { $concat: ["$first_name", ", ", "$last_name"] } }</font><p>You can view the code generated by Navicat by clicking on the Script Preview tab:</p><font face="monospace">db.createView("Untitled","actor",[<br/>&nbsp;&nbsp;&nbsp;&nbsp;{<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$project: {<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;_id: 0,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"full_name": {<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$concat: [<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"$first_name",<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;", ",<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"$last_name"<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;]<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br/>&nbsp;&nbsp;&nbsp;&nbsp;}<br/>])<br/></font><p>To see your new view, click on the Preview button or Result tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180918/view_result.jpg" style="max-width: 100%"></td></tr><p>Upon saving a view, Navicat executes the above db.createView command. The aggregation pipeline is saved in the system.views collection. 
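<p>If you'd like to sanity-check the projection logic without a MongoDB server, the $concat expression above can be simulated in plain JavaScript. The sample documents below are made up for illustration:</p>

```javascript
// Plain-JavaScript simulation of the view's $project stage.
// The documents are hypothetical samples, not data from a real collection.
const actors = [
  { _id: 1, first_name: "PENELOPE", last_name: "GUINESS" },
  { _id: 2, first_name: "NICK", last_name: "WAHLBERG" }
];

// Mirrors { _id: 0, full_name: { $concat: ["$first_name", ", ", "$last_name"] } }:
// the _id field is suppressed and the two name fields are concatenated.
function projectFullName(doc) {
  return { full_name: doc.first_name + ", " + doc.last_name };
}

const view = actors.map(projectFullName);
console.log(view);
```

<p>Against a real server you would instead query the saved view itself, e.g. with db.getCollection("Untitled").find().</p>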
A new document is added to that collection for each view you create.</p><h1 class="blog-sub-title">Going Forward</h1><p>Now that we've got the basics down, in the next blog, we'll learn about Collation.</p>]]></description>
</item>
<item>
<title>Analyzing MongoDB Schemas and Data</title>
<link>https://www.navicat.com/company/aboutus/blog/869-analyzing-mongodb-schemas-and-data.html</link>
<description><![CDATA[<b>Sep 11, 2018</b> by Robert Gravelle<br/><br/><p>Schema Analysis is useful for verifying your schemas, visualizing data distributions, and identifying data outliers. Available only for MongoDB, the Navicat for MongoDB Collection and Data Viewer toolbars include command buttons for analyzing your collection schema and document data.</p><p>In today's blog, we'll be exploring <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a>'s analysis tool.</p><h1 class="blog-sub-title">Schema Analysis</h1><p>In the Non-Essentials Edition of Navicat for MongoDB, selecting a Collection or a View in the Object tree enables the Analyze Schema button on the toolbar:</p><figure><figcaption>Analyze Schema with Collection Selected</figcaption><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_schema_button.jpg" style="max-width: 100%"></td></tr></figure><br/><figure><figcaption>Analyze Schema with View Selected</figcaption><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_schema_button_2.jpg" style="max-width: 100%"></td></tr></figure><p>Clicking the Analyze Schema button brings up the Analyze screen in a new tab:</p><figure><figcaption>Analyze Schema Screen</figcaption><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_schema_screen.jpg" style="max-width: 100%"></td></tr></figure><p>The Analyze screen contains a number of options for fine-tuning your analysis. 
These include:</p><ul style="list-style-type: disc;  margin-left: 24px; line-height: 20px;"><li>Filter: acts much like the WHERE clause of a SELECT query and is useful for narrowing down the data that is analyzed.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/filter.jpg" style="max-width: 100%"></td></tr></li><br/><li>Projection: Allows us to select which fields to include in the analysis. Fields may be ordered using the arrow buttons below the field list.<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/projection.jpg" style="max-width: 100%"></td></tr></li><br/><li>Analyze: There are three analysis parameters which may be set to configure exactly what data will be analyzed. The three fields include:<br/><br/><ul style="list-style-type: circle; margin-left: 30px; line-height: 20px;"><li>A dropdown containing four items: All, First, Last, and Random.</li><li>A textbox for entering a number.</li><li>A dropdown containing two items: Documents and Percent.</li>The three fields may be combined to specify a virtually unlimited variety of combinations, such as:<br/><li>the first 100 documents</li><li>the last 50 documents</li><li>a random 250 documents</li><li>the first 50 percent of documents</li><li>the last 20 percent of documents</li><li>a random 80 percent of documents</li></ul></li></ul><figure><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_parameters.jpg" style="max-width: 100%"></td></tr></figure><p>After the analysis has completed, you will see the schema analysis results. The results display visual information about the type and data distribution of selected fields. 
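<p>The sampling choices described above amount to straightforward selection logic. Here is an illustrative JavaScript sketch (the function and parameter names are invented for this example; Navicat's internal implementation is not published):</p>

```javascript
// Illustrative sketch of the Analyze sampling options (All/First/Last/Random,
// a count, and Documents/Percent). Names are hypothetical.
function sampleDocs(docs, mode, amount, unit) {
  if (mode === "all") return docs.slice();
  // Convert a percentage into a document count.
  const n = unit === "percent" ? Math.round((docs.length * amount) / 100) : amount;
  if (n <= 0) return [];
  if (mode === "first") return docs.slice(0, n);
  if (mode === "last") return docs.slice(-n);
  // "random": Fisher-Yates shuffle a copy, then take the first n documents.
  const shuffled = docs.slice();
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled.slice(0, n);
}
```

<p>For example, sampleDocs(docs, "last", 20, "percent") returns the final fifth of the collection.</p>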
Here's an analysis that presents the top 15 first and last names of actors within a collection:</p><figure><figcaption>Analyze Schema Results</figcaption><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_schema_results.jpg" style="max-width: 100%"></td></tr></figure><p>You can bring up the exact percentage of documents that contain a specific value by hovering the mouse pointer over a bar in the chart:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/hovering_over_bar.jpg" style="max-width: 100%"></td></tr><p>Different chart types are employed depending on the nature and distribution of the underlying data. Here's a population field presented as a Ring chart:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/pop_dist.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Analyzing Document Data</h1><p>The Collection tab toolbar contains an Analyze button for analyzing the collection's document data. It works in much the same way as the Analyze Schema button in that it displays a new tab with options for fine-tuning the analysis.</p><figure><figcaption>Analyze Button on Collection tab toolbar</figcaption><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_button.jpg" style="max-width: 100%"></td></tr></figure><p>Here are the results of an analysis that confirms that a collection of movie categories is evenly distributed:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/analyze_results.jpg" style="max-width: 100%"></td></tr><p>We can easily assess that documents are evenly distributed by the uniform height of the vertical bars. 
Moreover, hovering over each bar shows that it makes up exactly 6.25% of the collection:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180911/categories_distrib.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Schema Analysis has many practical applications, from verifying your schemas and visualizing data distributions to identifying data outliers. It's built into <a class="default-links" href="https://navicat.com/en/products/navicat-for-mongodb" target="_blank">Navicat for MongoDB</a> - Non-Essentials Edition. Give it a try!</p>]]></description>
</item>
<item>
<title>Navicat for MongoDB Grid View Features - Expanding Array Values, Colorizing Cells, and Migrating Data (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/823-navicat-for-mongodb-grid-view-features-expanding-array-values,-colorizing-cells,-and-migrating-data-part-2.html</link>
<description><![CDATA[<b>Sep 4, 2018</b> by Robert Gravelle<br/><br/><p>In the last couple of blogs, we learned how each of Navicat for MongoDB's Collection views - Grid, Tree, and JSON - provides a different set of command buttons for performing operations that are tailored to that particular view. In the last blog, we learned about transactions, filtering, and sorting. In today's blog, we'll be covering how to expand array values, colorize cells, and migrate data between MongoDB and other databases.</p><h1 class="blog-sub-title">Expanding and Collapsing Array Values</h1><p>In Grid View, arrays are depicted as "(Array) [N] Elements". However, that does not mean that we cannot view their contents. Placing the cursor on a cell that contains an array causes the Expand - [&lt;|&gt;] - button to appear on the right-hand side of the cell. Clicking it then drills down into the array so that we may view its elements. This process can continue until the innermost array is displayed. We may then return to the top (i.e. collection) level by clicking on the [&gt;|&lt;] Collapse All button at any time:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/expand_and_collapse.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Highlighting Cells based on Data Type</h1><p>Only present on the Grid View Collection Tab toolbar, color highlighting makes a cell's data type easy to identify. 
The Type Color button on the toolbar (as well as the Enable Coloring option checkbox on the Type Color pane) applies the colors specified on the Type Color pane to highlight cells based on their data types.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/type_colors.jpg" style="max-width: 100%"></td></tr><p>If the grid window is docked to the Navicat main window, you can click the Type Color icon in the Information pane to show the color mapping fields:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/type_color_icon.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Importing and Exporting a Collection</h1><p>In all View types, the Collection Tab toolbar contains Import and Export buttons. These buttons perform very much the same function as the Import and Export items in the File menu. We won't be going over every step of the import and export process here, but there are a couple of points to be aware of:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>On the screen that lets you set the target collection, you can either choose an existing collection or a new one. To create a new collection, just enter the collection name in the Target Collection field. That will automatically check the New Collection box:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/import_wizard.jpg" style="max-width: 100%"></td></tr><br/><br/>New collections will include the MongoDB _id field:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/actor_collection.jpg" style="max-width: 100%"></td></tr></li><br/><li>On the Field Mappings screen, you can select which fields you want to import as well as their data type. It shows the Source and Target fields so that you may compare the effect of the data type on the latter. 
You may also designate which fields make up the primary key:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/import_wizard_field_types.jpg" style="max-width: 100%"></td></tr></li></ul><p>With regard to the Export process:</p><ul style="list-style-type: decimal; margin-left: 24px; line-height: 20px;"><li>When exporting large collections, a dialog lets you select whether to export the entire collection or only the currently displayed rows:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/export_confirm.jpg" style="max-width: 100%"></td></tr></li><br/><li>On the third screen of the Export Wizard, you may want to remove the MongoDB _id field if you are planning on importing the data into another database type. Of course, if you are using a Navicat product to perform the import, it will allow you to ignore that field at that time:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/export_remove_mongodb_id_field.jpg" style="max-width: 100%"></td></tr></li><br/><li>The fourth screen of the Export Wizard lets you specify whether or not to include column titles as the first row of the exported data. It is recommended that you do check the "Include column titles" box because it tends to make importing the data easier:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180904/export_include_column_titles.jpg" style="max-width: 100%"></td></tr></li></ul><p>In the next blog, we'll learn about the last button on the Grid View Collection Tab toolbar: the Analyze feature.</p>]]></description>
</item>
<item>
<title>Navicat for MongoDB Grid View Commands (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/784-navicat-for-mongodb-grid-view-commands-part-1.html</link>
<description><![CDATA[<b>Aug 28, 2018</b> by Robert Gravelle<br/><br/><p>In the last couple of blogs, we have covered how the Navicat for MongoDB database administration tool makes working with documents and collections easier. For instance, documents can be presented in one of three ways: in Grid view, Tree view, or JSON view. But that's just the tip of the iceberg. The Collection Tab toolbar includes a number of commands for each of the three View types. In today's blog, we'll take a closer look at a few of the Grid View toolbar commands.</p><h1 class="blog-sub-title">The Grid View Toolbar at a Glance</h1><p>When a Collection Tab is set to Grid View, you'll notice nine command buttons on the tab toolbar:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/grid_view_toolbar.jpg" style="max-width: 100%"></td></tr><p>Let's go over some of these from left to right:</p><ul style="list-style-type: decimal; margin-left: 24px;"><li>Begin Transaction: As the name implies, the Begin Transaction button starts a transaction. If 'Auto begin transaction' is enabled in Options, a transaction will be started automatically when opening the data viewer. After the Begin Transaction button is clicked, its label changes to 'Commit' and the 'Rollback' button is added as well:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/commit_and_rollback_buttons.jpg" style="max-width: 100%"></td></tr><br/>Clicking 'Commit' makes permanent all changes performed in the current transaction, while clicking 'Rollback' undoes the work done within the current transaction.<br/></li><br/><li>Assistant Editor: Navicat provides powerful assistant editors to view and edit Text, Hex, Image, and Web. The editor allows you to view, update, insert, or delete data in a table or a collection. 
Select Text, Hex, Image, or Web from the toolbar to activate the appropriate viewer/editor:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/dynamic_editors_list.jpg"></td></tr><br/>For example, selecting "Hex" from the list displays the hex editor at the bottom of the tab:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/hex_editor.jpg" style="max-width: 100%"></td></tr></li><br/><li>Filter Wizard: The Filter Wizard allows you to create and apply filter criteria to the data grid, much in the same way that a WHERE clause does to query results.<br/><br/><ol style="list-style-type: lower-roman; margin-left: 30px;"><li>Click 'Filter' from the toolbar to activate the filter.</li><br/><li>To add a new condition to the criteria, click the [+] button. You may also add a condition with parentheses by clicking on [()+]:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/add_filter_buttons.jpg" style="max-width: 100%"></td></tr></li><li>Click on the field name (next to the checkbox) and choose a field from the list:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/filter_field_list.jpg" style="max-width: 100%"></td></tr></li>          <br/><li>Click on the operator (next to the field name) to choose a filter operator. You can choose '[Custom]' from the list to enter the condition manually:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/filter_operator_list.jpg" style="max-width: 100%"></td></tr></li><li>Click on '&lt;?&gt;' to activate the Criteria Editor dialog and enter the criteria value(s). The data type used in the criteria values box is determined by the data type assigned to the corresponding field. 
You can also change the field type via the Field Types list:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/value_type_list.jpg" style="max-width: 100%"></td></tr></li>          <br/><li>Repeat steps i-v to add another new condition.</li><br/><li>To add parentheses to existing conditions, right-click on the selected conditions and select 'Group with Bracket' from the context menu. To remove the parentheses, right-click a bracket and select 'Delete Bracket' or 'Delete Bracket and Conditions':<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/delete_bracket_menu_item.jpg" style="max-width: 100%"></td></tr></li>          <br/><li>With multiple criteria, clicking on the logical operator (next to the criteria values) toggles between "and" and "or".</li><br/><li>Click on the Apply Filter button to see the result of the filtering expression:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/apply_filter_button.jpg" style="max-width: 100%"></td></tr></li>          <br/><li>To save your filter criteria as a profile for future use, right-click anywhere in the filter editor and select 'Save Profile' or 'Save Profile As' from the context menu:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/save_profile.jpg" style="max-width: 100%"></td></tr></li></ol></li><br/><li>Sort Documents: MongoDB stores documents in the order in which they were added to the collection. 
Sorting in Navicat is used to temporarily rearrange documents, so that you can view or update them in a different sequence.<br/><br/>One way to sort by one field is to click the Sort arrow on the right side of the field header and select 'Sort Ascending' or 'Sort Descending' from the context menu:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/sort_menu.jpg" style="max-width: 100%"></td></tr><br/>To sort by multiple fields, click the 'Sort' button from the toolbar. In the Sort Editor, you can enter any number of fields and set the sort order for each:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180828/sort_fields.jpg" style="max-width: 100%"></td></tr></li></ul><p>That's just a few of the Collection Tab toolbar commands. We'll explore the remaining Toolbar Commands in the next blog.</p>]]></description>
</item>
<item>
<title>Working with Documents in Navicat for MongoDB</title>
<link>https://www.navicat.com/company/aboutus/blog/769-working-with-documents-in-navicat-for-mongodb.html</link>
<description><![CDATA[<b>Aug 21, 2018</b> by Robert Gravelle<br/><br/><p>MongoDB is a NoSQL database that stores data as collections of documents. Therefore, it behooves you to learn how to work with both documents and collections. In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/768-mongodb-documents-tutorial.html" target="_blank">MongoDB Documents Tutorial</a> we learned how documents are stored in MongoDB as well as how to append new ones to a collection using the Navicat for MongoDB database administration tool. In today's blog, we'll be covering how to view, delete, and edit documents.</p><h1 class="blog-sub-title">One Document: 3 Views</h1><p>In Navicat for MongoDB, data can be presented in one of three ways, depending on what you're trying to do with the documents. They are:</p><ul style="list-style-type: decimal;"><li>Grid view</li><br/><li>Tree view</li><br/><li>JSON view</li></ul><p>If you look at the lower-right quadrant of the Collection Tab, you'll see the three buttons for each view:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/doc_view_buttons.jpg" style="max-width: 100%"></td></tr><p>They are also accessible via the View command from the Main Menu:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/doc_view_menu_items.jpg" style="max-width: 100%"></td></tr><p>Grid View (pictured above) is the traditional tabular display that DBAs are most familiar with. It can handle any document size, and supports advanced features like highlighting cells based on data types, column hiding and more.</p><p>Tree View shows your documents in a hierarchical view. 
All embedded documents and arrays are represented as nodes, which can be expanded or collapsed as needed:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/tree_view.jpg" style="max-width: 100%"></td></tr><p>You can also show your data as JSON documents; documents can be added using the built-in validation mechanism, which ensures your edits are correct.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/json_view.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Adding and Deleting Documents in Grid View</h1><p>The previous section described the three View buttons in the lower-right quadrant of the Collection Tab. Now, we'll turn our attention to the lower-left quadrant of the Collection Tab. There, you'll find the Add Document and Delete Document buttons:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/add_delete_buttons.jpg" style="max-width: 100%"></td></tr><p>Clicking the Add Document button appends an empty row to the end of the grid. You can enter values directly into each cell. The TAB key moves the cursor to the adjacent cell on the right, while SHIFT+TAB moves it one cell to the left.</p><p>Clicking the Apply Changes button saves the new document.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/apply_and_discard_changes.jpg" style="max-width: 100%"></td></tr><p>Meanwhile, clicking the Discard Changes button removes the new document without saving it to the database.</p><p>You can edit an existing document via the Edit Document... 
command on the context menu:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/edit_document_menu_item.jpg" style="max-width: 100%"></td></tr><p>That brings up the selected document in JSON format for in-place editing:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/edit_document_dialog.jpg" style="max-width: 100%"></td></tr><p>You can validate the document at any time by clicking the Validate button. In either case, clicking the Update button will validate the document automatically before committing your changes.</p><h1 class="blog-sub-title">Adding and Deleting Documents in Tree View</h1><p>Clicking the Add Document button in Tree View causes an empty document to open in the editor. Clicking to the right of a field label displays a textbox to enter the value:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/adding_a_doc_in_tree_view.jpg" style="max-width: 100%"></td></tr><p>Deleting a document in Tree View removes the current document from the database and displays the previous one in the collection.</p><h1 class="blog-sub-title">Adding and Deleting Documents in JSON View</h1><p>Clicking the Add Document button in JSON View causes the Add Document dialog to appear with an empty document. There, you can enter all of the document fields as free-form text:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180821/adding_a_doc_in_json_view.jpg" style="max-width: 100%"></td></tr><p>Clicking the Delete Document button in JSON View removes the enclosing document around the cursor. A prompt will appear asking you to confirm the identity of the document to be deleted.</p><h1 class="blog-sub-title">Going Forward</h1><p>In upcoming blogs, we'll learn about collection sorting, working with different field types, and filtering documents based on multiple criteria.</p>]]></description>
</item>
<item>
<title>MongoDB Documents Tutorial</title>
<link>https://www.navicat.com/company/aboutus/blog/768-mongodb-documents-tutorial.html</link>
<description><![CDATA[<b>Aug 14, 2018</b> by Robert Gravelle<br/><br/><p>The massive volumes of data generated by modern interconnected systems and devices have spawned a new kind of database known as NoSQL. Perhaps the best known of this new breed of non-relational database is MongoDB. Unlike traditional relational databases (RDBMSes), MongoDB does not contain tables. Instead, it stores data as collections of documents.</p><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/767-working-with-nosql-databases" target="_blank">Working with NoSQL Databases</a> blog, we learned how to create a new database and collection using the Navicat for MongoDB database management & design tool. In today's follow-up, we'll learn about MongoDB documents and add some to our collection.</p><h1 class="blog-sub-title">Comparing MongoDB and RDBMS Objects</h1><p>While MongoDB shares some of the same terms as those of traditional RDBMSes, others are unique to NoSQL databases. To help clarify, here's a table that compares RDBMS terminology to that of MongoDB:</p><head><style>table, th, td {    border: 1px solid black;    border-collapse: collapse;}th, td {    text-align: center;}</style></head><body><table border="1" style="width: 600px; line-height: 25px;"><tr bgcolor="lightgray"><th height="25">RDBMS</th><th height="25">MongoDB</th></tr><tr><td height="25">Database</td><td height="25">Database</td></tr><tr><td height="25">Table</td><td height="25">Collection</td></tr><tr><td height="25">Tuple/Row</td><td height="25">Document</td></tr><tr><td height="25">Column</td><td height="25">Field</td></tr><tr><td height="25">Table Join</td><td height="25">Embedded Documents</td></tr><tr><td height="25">Primary Key</td><td height="25">Primary Key (the default key _id is provided by MongoDB)</td></tr></table></body><h1 class="blog-sub-title">MongoDB Documents Explained</h1><p>MongoDB stores data as <a class="default-links" href="http://bsonspec.org/" 
target="_blank">BSON</a> documents. BSON is a binary representation of JSON documents that supports additional data types beyond those of JSON. MongoDB documents are composed of field:value pairs and have the following structure:</p><pre class="brush:javascript">{
    field1: value1,
    field2: value2,
    field3: value3,
    ...
    fieldN: valueN
}</pre><p>The value of a field can be any valid BSON data type, including other documents, arrays, and arrays of documents. Here's an example of documents that contain information about American cities. Notice the different data types:</p><pre class="brush:javascript">// 1
{
    "_id": "01005",
    "city": "BARRE",
    "loc": [
        -72.108354,
        42.409698
    ],
    "pop": NumberInt("4546"),
    "state": "MA"
}
// 2
{
    "_id": "01012",
    "city": "CHESTERFIELD",
    "loc": [
        -72.833309,
        42.38167
    ],
    "pop": NumberInt("177"),
    "state": "MA"
}
// 3
// etc...</pre><h1 class="blog-sub-title">Creating a New Document in Navicat for MongoDB</h1><p>In the last blog, we created a database named "my_mongo_db" and collection named "my_first_collection". Now, we'll add some data to the collection in the form of documents.</p><ul style="list-style-type: decimal;"><li>The first step is to open the collection that we wish to add the document to. Select the "my_first_collection" object in the Object pane and click the Open Collection button on the Objects toolbar:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/open_collection_button.jpg" style="max-width: 100%"></td></tr><br/>That will open the collection in a new tab.</li><br/><li>You'll find the Document operations at the bottom of the tab. 
Click the Plus sign to add a document:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/add_document_button.jpg" style="max-width: 100%"></td></tr></li><br/><li>In the Add Document dialog, enter the following fields, which are similar to those of the document samples above:<br/><pre class="brush:javascript">{
    "_id": "01005",
    "city": "BARRE",
    "loc": [
        -72.108354,
        42.409698
    ],
    "pop": 4546,
    "state": "MA"
}</pre><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/add_document_dialog.jpg" style="max-width: 100%"></td></tr></li><br/><li>It's a good idea to validate the document before saving it. You can do that via the Validate button. The above data should produce a success message. Should errors be encountered, a message will identify the first error in the document, along with its line and column number, so that you can easily locate it:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/validation_error_message.jpg" style="max-width: 100%"></td></tr></li><br/><li>Click the Add button to close the dialog and save the new document. You should now see it in the Collection tab:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/new_doc_in_collection.jpg" style="max-width: 100%"></td></tr></li></ul><p>You can add more documents by following the same process as above:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180814/3_docs.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>Now that we've learned how to add documents to our collection, in the next blog, we'll cover how to view, delete, and edit documents in Navicat for MongoDB.</p>]]></description>
</item>
<item>
<title>Working with NoSQL Databases</title>
<link>https://www.navicat.com/company/aboutus/blog/767-working-with-nosql-databases.html</link>
<description><![CDATA[<b>Aug 10, 2018</b> by Robert Gravelle<br/><br/><p>The term "NoSQL" actually encompasses a wide variety of different database technologies that were developed in response to the demands dictated by modern applications and Internet of Things (IoT) devices. The massive volumes of new, rapidly changing data types created by the linking of numerous systems and devices have presented challenges for traditional DBMSes:</p><ul style="list-style-type: decimal;"><li>Relational databases were never designed to cope with the scale and agility challenges demanded by modern applications.</li><br/><li>Nor were they built to take advantage of the cheap storage and processing power available to today's servers.</li></ul><p>One of the most popular NoSQL databases is MongoDB. In fact, it's the leading non-relational database in the world. As such, it's the perfect starting place for learning NoSQL operations like indexing, regular expressions, data sharding, etc. In the next few blogs, we'll learn how to work with MongoDB using the new Navicat for MongoDB database management &amp; design tool. In today's tutorial, we start with the basics of database and document creation.</p><h1 class="blog-sub-title">Creating a new Database</h1><p>In this section we'll connect to our active MongoDB service and add a brand new database.</p><ul style="list-style-type: decimal;"><li>Launch the Navicat for MongoDB application.</li><br/><li>Click the Connection... button on the main toolbar and select MongoDB... 
from the list:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/connection_button.jpg" style="max-width: 100%"></td></tr></li><br/><li>On the New Connection dialog, enter a name in the Connection Name field:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/new_connection_dialog.jpg" style="max-width: 100%"></td></tr><br/>You can test the connection by clicking the Test Connection button.<br/><br/>If you need it, you can also obtain the server URI - e.g. mongodb://localhost:27017/ - by clicking the URI button.<br/><br/></li><br/><li>Click the OK button to close the dialog and create the connection. It will then appear in the left-hand Connection list.</li></ul><p>With our connection in place, we are now ready to create a new database.</p><ul style="list-style-type: decimal;"><li>Double-click the connection in the Connection list to open the database connection.</li><br/><li>Next, right-click the connection name and choose Create Database... from the context menu.</li><br/><li>A dialog will appear in which you can provide the database name. After you have done so, click the OK button to close the dialog and create the new database.</li></ul><p>Behind the scenes, Navicat employs the MongoDB "use" command to create the database. It will then appear under the current connection in the left-hand Connection list.</p><p>Now, we'll add a collection. A Collection is a group of MongoDB documents. It is the equivalent of a DBMS table. A collection exists within a single database. 
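The "use" command mentioned above has a direct shell counterpart. A minimal mongo shell sketch of creating a database and a collection (the names blog_demo and cities are hypothetical placeholders, not taken from the tutorial):

```javascript
// "use" switches to the named database, creating it lazily;
// it is only persisted once it contains a collection or data.
use blog_demo

// Explicitly create a collection in the current database.
db.createCollection("cities")
```

Navicat issues the equivalent of these commands for you when you create the database and save a collection through the GUI. 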
Typically, all documents in a collection are related in purpose or otherwise similar.</p><ul style="list-style-type: decimal;"><li>If you expand the Database object by clicking on the arrow to the left of the DB name, you will see all of the database objects, including its Collections:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/expanded_db.jpg" style="max-width: 100%"></td></tr><br/>Moreover, clicking on the Database object or any of its objects will enable applicable commands on the Database Objects Toolbar, in particular, the New Collection, Import Wizard, and Export Wizard buttons:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/database_objects_toolbar.jpg" style="max-width: 100%"></td></tr><br/><br/></li><li>Click the New Collection button. That will bring up a new Untitled Collection Tab:<br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/untitled_collection_tab.jpg" style="max-width: 100%"></td></tr><br/>It contains several child tabs for specifying all of the collection's attributes. We will cover these in a future blog. For now, we'll save it with the defaults.<br/><br/></li><li>Click the Save button on the Untitled Collection Tab and enter the Collection name in the prompt.</li></ul><p>The new collection will be added to the Database Explorer, under the Collections object:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180810/my_new_collection_in_db_explorer.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In today's tutorial, we learned the basics of database and collection creation in MongoDB using the new Navicat for MongoDB database management &amp; design tool. So far, the process has been similar to that of traditional DBMSes. 
In the next blog, we'll be getting into uncharted territory when we add documents to our collection.</p>]]></description>
</item>
<item>
<title>Navicat for MongoDB is Here</title>
<link>https://www.navicat.com/company/aboutus/blog/766-navicat-for-mongodb-is-here.html</link>
<description><![CDATA[<b>Jul 31, 2018</b> by Robert Gravelle<br/><br/><p>MongoDB is a different kind of database. Unlike traditional relational databases like SQL Server and MySQL, it stores data as JSON-like documents. While MongoDB's NoSQL approach does yield some advantages over its RDBMS competitors, it also makes it harder for makers of third-party database management tools to integrate support for MongoDB within their products, leaving users few options besides MongoDB's own <a class="default-links" href="https://www.mongodb.com/products/compass" target="_blank">Compass UI tool</a>.</p><p>That is, until now.</p><p>Navicat is proud to announce the addition of their newest GUI DB Management tool - Navicat for MongoDB - to their already extensive product line. It allows users to connect to local/remote MongoDB servers and is compatible with MongoDB Atlas. Navicat for MongoDB offers many note-worthy features for managing, monitoring, querying, and visualizing data. In today's blog, we'll explore a few of these.</p><h1 class="blog-sub-title">Connect Securely to your Databases</h1><p>It's never prudent to connect to remote databases over an unsecured network. That's why Navicat for MongoDB can establish secure connections through SSH Tunneling and SSL. It supports numerous database server authentication mechanisms including Kerberos, X.509 authentication and others.</p><h1 class="blog-sub-title">Collaboration Made Easy</h1><p>Like other Navicat products, Navicat for MongoDB includes the <a class="default-links" href="https://www.navicat.com/en/navicat-cloud" target="_blank">Navicat Cloud</a> service, which provides real-time access to your connection settings, queries and virtual groups. Navicat Cloud synchronizes connection settings, queries and virtual groups so that you can share them with your coworkers anytime and anywhere. 
It's just one of the many ways that Navicat for MongoDB helps you maximize productivity.</p><h1 class="blog-sub-title">Advanced Query Editing</h1><p>Unlike traditional relational databases, MongoDB uses a specialized query language to both filter and aggregate document data. Thanks to Navicat's Visual Query Builder, you don't have to learn a whole new querying language. It guides you in creating, editing and running your queries without having to worry about syntax and proper usage of commands. Code Completion helps you construct your queries faster while avoiding typos. Finally, Visual Query Builder includes a number of customizable Code Snippets that you can insert directly into the editor.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180731/query_builder.jpg" style="max-width: 100%"></td></tr><p>Aggregation queries are a special type of query that groups values from multiple documents together. The Aggregate Builder makes constructing aggregate queries a snap. Just add expressions to the pipeline until you've got the data you need!</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180731/aggregate_query.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Import and Export Wizards</h1><p>Having the ability to import and export data to and from your databases is an indispensable feature for database administration. Using the Import Wizard, you can transfer data into an existing MongoDB collection from a diverse array of formats, including text, CSV, JSON, and XML files. 
You can also choose to import from any ODBC data store over a data source connection.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180731/import_wizard_import_types.jpg" style="max-width: 100%"></td></tr><p>Likewise, data from collections, views, or query results may be exported to formats like Excel, Access, CSV and many more.</p><h1 class="blog-sub-title">Conclusion</h1><p>At last, database administrators and developers have a powerful new option for working with their MongoDB databases. Navicat for MongoDB is available for Windows, macOS and Linux. Visit the <a class="default-links" href="https://www.navicat.com/en/products/navicat-for-mongodb" target="_blank">product page</a> for more information or the <a class="default-links" href="https://www.navicat.com/en/download/navicat-for-mongodb" target="_blank">download page</a> to try it for yourself.</p>]]></description>
</item>
<item>
<title>Schedule Database Tasks using the Navicat Event Designer (Part 5)</title>
<link>https://www.navicat.com/company/aboutus/blog/764-schedule-database-tasks-using-the-navicat-event-designer-part-5.html</link>
<description><![CDATA[<b>Jul 24, 2018</b> by Robert Gravelle<br/><br/><p>A database event is a task that runs according to a schedule. Also known as a "scheduled event", a database event is similar to a cron job in UNIX or a Task Scheduler task in Windows, except that scheduled events are configured using a database's syntax and/or command-line interface (CLI). Database events have many uses, such as optimizing database tables, cleaning up logs, archiving data, or generating complex reports during off-peak times.</p><p>In previous blogs on this topic, we learned how to configure events using MySQL as our database. Today, we're going to schedule a database task using the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a> GUI Database Management Tool.</p><h1 class="blog-sub-title">The Navicat Event Designer</h1><p>In Navicat database management offerings, including Navicat Premium, the Event Designer is the tool for working with events. It's accessible by clicking on the Event button on the main toolbar:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/event_button_on_mail_toolbar.jpg" style="max-width: 100%"></td></tr><p>Clicking the Event button opens the Event object list in the Object pane. The Object pane toolbar contains three buttons: Design Event, New Event, and Delete Event. If you have no events defined, only the New Event button will be enabled.</p><h1 class="blog-sub-title">Creating a New Event</h1><p>Click the New Event button to open a new untitled Definition tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/untitled_event.jpg" style="max-width: 100%"></td></tr><p>You can enter any valid SQL procedure statement in the Definition tab. 
This can be a simple statement such as "INSERT INTO tbl_users (first_name,last_name) VALUES('Bob','Jones');", or it can be a compound statement written within BEGIN and END statement delimiters. Compound statements can contain declarations, loops, and other control structure statements.</p><p>Note that we don't have to write the CREATE EVENT code, as this is handled by Navicat (as we'll see in the following sections).</p><p>Here is an event definition that inserts a new row in the sakila.actor table:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/event_definition.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Scheduling your Event</h1><p>Navicat alleviates much of the burden of scheduling events by providing a form for entering scheduling details. The scheduling form is located on the Schedule tab. It supports adding Intervals that may comprise either simple or complex time units. Here's a simple example that sets the event to execute 5 minutes after creation:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/event_schedule_tab.jpg" style="max-width: 100%"></td></tr><p>Here's a more complex event schedule that starts in 5 minutes, and runs every five-and-a-half hours for 3 days:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/event_schedule_every.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Saving an Event</h1><p>To save an Event, click the Save button on the Event tab. 
If you like, you can preview the generated SQL on the SQL Preview tab before saving it:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/sql_preview.jpg" style="max-width: 100%"></td></tr><p><i>Note that the statement is read-only and cannot be edited in the preview.</i></p><h1 class="blog-sub-title">Adding Comments</h1><p>You can include comments with your Event on the Comment tab.</p><p>It adds them to the CREATE EVENT statement via the COMMENT clause:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/comments.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Deleting an Event</h1><p>To delete an Event, select it in the Object tab and click the Delete Event button. A warning dialog will ask you to confirm that you wish to delete the Event:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180724/delete_event_button.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Modifying an Event</h1><p>To modify an Event, select it in the Object tab and click the Design Event button. That will open it in the Event tab.</p>]]></description>
</item>
<item>
<title>Starting and Stopping MySQL 8 Events (Part 4)</title>
<link>https://www.navicat.com/company/aboutus/blog/757-starting-and-stopping-mysql-8-events-part-4.html</link>
<description><![CDATA[<b>Jul 17, 2018</b> by Robert Gravelle<br/><br/><p>Since version 5.1.6, MySQL has supported events. They employ a natural language scheduling syntax, so that you can say: "I want the MySQL server to execute this SQL statement every day at 11:30am, until the end of the year". To help you write your event statements, MySQL provides excellent <a class="default-links" href="https://dev.mysql.com/doc/refman/8.0/en/create-event.html" target="_blank">documentation</a> on CREATE EVENT syntax. Despite all of this, getting a firm grasp of event scheduling can still take some trial and error.</p><p>There are some challenges inherent to events, such as making an event recur, and making it run at a given time. Moreover, rather than having an event that just runs once or forever, you can also schedule a recurring event that is valid only within a specific time period, using STARTS and ENDS clauses.</p><p>In today's blog, we'll explore the syntax to create events that run according to various schedules, from very simple to more complex.</p><h1 class="blog-sub-title">Scheduling Parameters</h1><p>An event's "schedule" can be a timestamp in the future, a recurring interval, or a combination of recurring intervals and timestamps.</p><p>The possibilities are:</p><ul style="list-style-type: disc;"><li>AT timestamp [+ interval integer_value time_keyword ]</li><li>EVERY interval</li><li>EVERY interval STARTS timestamp</li><li>EVERY interval ENDS timestamp</li><li>EVERY interval STARTS timestamp ENDS timestamp</li></ul><p>Here are two examples using the "AT timestamp" clause.</p><p>This event makes the MySQL server drop a table exactly 5 days from now:</p><pre>CREATE EVENT `My event` ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 5 DAY DO DROP TABLE t;</pre><p>This event makes the MySQL server drop a table on February 24, 2018 at exactly noon:</p><pre>CREATE EVENT The_Main_Event ON SCHEDULE AT TIMESTAMP '2018-02-24 12:00:00' DO DROP TABLE t;</pre><p><i>EVERY 
interval</i> means "Do this repeatedly". A recurring interval starts with EVERY, followed by a positive integer plus an INTERVAL interval, as we saw in the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/755-scheduling-mysql-8-events-part-3.html" target="_blank">last blog</a>.</p><p>For example, this event makes MySQL drop table t once each year, starting now:</p><pre>CREATE EVENT e ON SCHEDULE EVERY 1 YEAR DO DROP TABLE t;</pre><h1 class="blog-sub-title">The STARTS and ENDS Clauses</h1><p>An EVERY clause may contain an optional STARTS and/or ENDS clause.</p><p>STARTS is followed by a timestamp value that indicates when the action should begin repeating, and may also use + INTERVAL interval to specify an amount of time "from now". Not specifying STARTS is the same as using STARTS CURRENT_TIMESTAMP, so that the event begins repeating immediately upon creation of the event.</p><p>An EVERY clause may also contain an ENDS clause. The ENDS keyword is followed by a timestamp value that tells MySQL when the event should stop repeating. Not using ENDS means that the event continues executing indefinitely.</p><p>"EVERY interval [ STARTS timestamp1 ] [ ENDS timestamp2 ]" means "Do this repeatedly, starting at timestamp1 if it's specified, ending at timestamp2 if it's specified". 
For example, this event tells the database to drop a table once each year, starting exactly 3 days from now:</p><pre>CREATE EVENT evt ON SCHEDULE EVERY 1 YEAR
  STARTS CURRENT_TIMESTAMP + INTERVAL 3 DAY
DO DROP TABLE t;</pre><br/><p>This event would cause MySQL to drop a table once each year for five years, starting exactly 2 days from now:</p><pre>CREATE EVENT e ON SCHEDULE EVERY 1 YEAR
  STARTS CURRENT_TIMESTAMP + INTERVAL 2 DAY
  ENDS CURRENT_TIMESTAMP + INTERVAL 5 YEAR
DO DROP TABLE t;</pre><p>Now that we've gained an understanding of scheduling events, in the next blog we'll create some events using <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="_blank">Navicat Premium</a>.</p>]]></description>
</item>
<item>
<title>Scheduling MySQL 8 Events (Part 3)</title>
<link>https://www.navicat.com/company/aboutus/blog/755-scheduling-mysql-8-events-part-3.html</link>
<description><![CDATA[<b>Jul 10, 2018</b> by Robert Gravelle<br/><br/><p>Welcome to the third installment in our series on Database Events! <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/749-an-introduction-to-database-events-part-1.html" target="_blank">Part 1</a> outlined the difference between Database Events and Scheduled Tasks, as well as how to configure the Event Scheduler Thread in MySQL. In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/754-working-with-mysql-events-part-2.html" target="_blank">Part 2</a>, we explored how to create MySQL events using the CREATE EVENT statement. Today's blog will delve deeper into how to schedule MySQL 8 Events - an essential topic that only received a cursory mention last time.</p><h1 class="blog-sub-title">Setting the Execution Interval</h1><p>Intervals play an important role in defining Events. Unless you are creating a one-time event that executes immediately, you have to specify an Interval, i.e., some point in the future relative to the current date and time, for example, "two weeks from now". Moreover, in order to have an event recur, you have to provide an interval at which to do so, such as "every 6 hours".</p><p>Let's start with the event's initial execution time. It consists of the "AT CURRENT_TIMESTAMP" clause, followed by an optional " + INTERVAL interval". The latter part of the AT clause specifies how long to wait before executing. For example, the following event would execute one week after creation:</p><pre>CREATE EVENT my_event
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 WEEK</pre><p>The interval portion is based on the intervals accepted by the DATE_ADD() function. These consist of two parts: a quantity and a unit of time. 
The units keywords are also the same, except that microseconds are not applicable to events.</p><p>Here are all the valid Interval unit values and the expected expression argument for each value:</p><style>table, th, td {    border: 1px solid black;    border-collapse: collapse;}th, td {    padding: 5px;    text-align: left;}</style><table border="1"><tr><th><b><font face="courier new">unit</font></b> Value</th><th>Expected <b><font face="courier new">expr</font></b> Format</th></tr><tr><td><font face="courier new">SECOND</font></td><td><font face="courier new">SECONDS</font></td></tr><tr><td><font face="courier new">MINUTE</font></td><td><font face="courier new">MINUTES</font></td></tr><tr><td><font face="courier new">HOUR</font></td><td><font face="courier new">HOURS</font></td></tr><tr><td><font face="courier new">DAY</font></td><td><font face="courier new">DAYS</font></td></tr><tr><td><font face="courier new">WEEK</font></td><td><font face="courier new">WEEKS</font></td></tr><tr><td><font face="courier new">MONTH</font></td><td><font face="courier new">MONTHS</font></td></tr><tr><td><font face="courier new">QUARTER</font></td><td><font face="courier new">QUARTERS</font></td></tr><tr><td><font face="courier new">YEAR</font></td><td><font face="courier new">YEARS</font></td></tr><tr><td><font face="courier new">MINUTE_SECOND</font></td><td><font face="courier new">'MINUTES:SECONDS'</font></td></tr><tr><td><font face="courier new">HOUR_SECOND</font></td><td><font face="courier new">'HOURS:MINUTES:SECONDS'</font></td></tr><tr><td><font face="courier new">HOUR_MINUTE</font></td><td><font face="courier new">'HOURS:MINUTES'</font></td></tr><tr><td><font face="courier new">DAY_SECOND</font></td><td><font face="courier new">'DAYS HOURS:MINUTES:SECONDS'</font></td></tr><tr><td><font face="courier new">DAY_MINUTE</font></td><td><font face="courier new">'DAYS HOURS:MINUTES'</font></td></tr><tr><td><font face="courier new">DAY_HOUR</font></td><td><font face="courier new">'DAYS HOURS'</font></td></tr><tr><td><font face="courier new">YEAR_MONTH</font></td><td><font face="courier new">'YEARS-MONTHS'</font></td></tr></table><p>Using the above table as a guide, if we wanted to express minutes and seconds, such as "two minutes and ten seconds", we would write:</p><pre>CREATE EVENT my_event
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL '2:10' MINUTE_SECOND</pre><p>Note that:</p><ul style="list-style-type: decimal;"><li>Units are always expressed as singular (with no "s").</li><li>In the event definition above, the '2:10' is the expected expression argument, and the MINUTE_SECOND is the interval unit.</li><li>Interval types that combine two different intervals, e.g. minutes and seconds, are known as complex time units.</li></ul><p>In cases where there is no interval unit for a specific complex time unit, such as weeks and days, you can combine intervals. For example, AT CURRENT_TIMESTAMP + INTERVAL 3 WEEK + INTERVAL 1 DAY is equivalent to "three weeks and one day from now".</p><h1 class="blog-sub-title">Scheduling Recurring Events</h1><p>Many - if not most - events recur according to a specified schedule. The interval at which an event recurs is set using the "EVERY interval" clause. Here's the definition for an event that executes every two days:</p><pre>CREATE EVENT my_event
ON SCHEDULE EVERY 2 DAY</pre><p>In the next blog, we'll learn how to set an event's start and end times.</p>]]></description>
</item>
<item>
<title>Working with MySQL Events (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/754-working-with-mysql-events-part-2.html</link>
<description><![CDATA[<b>Jul 3, 2018</b> by Robert Gravelle<br/><br/><p>Welcome back to our series on Database Events! <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/749-an-introduction-to-database-events-part-1.html" target="_blank">Part 1</a> outlined the difference between Database Events and Scheduled Tasks, as well as how to configure the Event Scheduler Thread in MySQL. In today's blog, we'll explore how to create MySQL events using CREATE EVENT syntax.</p><h1 class="blog-sub-title">Creating a New MySQL Event</h1><p>Creating an event is similar to creating other database objects such as stored procedures or functions. Like those objects, an event is a named database object that contains SQL statements. Here's the basic syntax:</p><pre>CREATE EVENT [IF NOT EXISTS] event_name
ON SCHEDULE schedule
DO
event_body</pre><p>A few things to note:</p><ul style="list-style-type: disc;"><li>The event name must be unique within a database schema.</li><li>If you have multiple SQL statements within the event body, you can wrap them in a BEGIN ... END block.</li></ul><p>Let's create an actual event to put the above syntax to use. We'll define and schedule a one-time event that inserts a message into a table called messages.</p><ul style="list-style-type: decimal;"><li>First, either find a suitable test database or create a new one. Then create a new table named "messages" by using the CREATE TABLE statement like so:<br/><pre>CREATE TABLE IF NOT EXISTS messages (
  id INT PRIMARY KEY AUTO_INCREMENT,
  message VARCHAR(255) NOT NULL,
  created_at DATETIME NOT NULL
);</pre></li><br/><li>Now it's time to create our event, using the CREATE EVENT statement:<br/><pre>CREATE EVENT IF NOT EXISTS test_event
ON SCHEDULE AT CURRENT_TIMESTAMP
DO
  INSERT INTO messages(message,created_at)
  VALUES('Test MySQL Event 1',NOW());</pre></li><br/><li>That should add our message to the messages table immediately. 
Let's verify by issuing a SELECT against the messages table:<br/><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180703/messages_table_event_1.jpg" style="max-width: 100%"></td></tr></li></ul><h1 class="blog-sub-title">Preserving Events on Completion</h1><p>Events are automatically dropped when they expire. In the case of a one-time event like the one we created, it expired when it finished executing.</p><p>We can view all events of a database schema by issuing the following statement at the MySQL command prompt:</p><pre>mysql> SHOW EVENTS FROM test;
Empty set</pre><p>To have events persist after they expire, we can use the ON COMPLETION PRESERVE clause. Here's a statement that creates another one-time event that is executed 30 seconds after its creation and not dropped after execution:</p><pre>CREATE EVENT test_event_2
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 30 SECOND
ON COMPLETION PRESERVE
DO
   INSERT INTO messages(message,created_at)
   VALUES('Test MySQL Event 2',NOW());</pre><p>Wait for at least 30 seconds and check the messages table. Another record should be added:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180703/messages_table_event_2.jpg" style="max-width: 100%"></td></tr><p>Let's execute the SHOW EVENTS statement again. 
The event is there (albeit in a DISABLED state) because of the effect of the ON COMPLETION PRESERVE clause:</p><pre>mysql> SHOW EVENTS FROM test;
+------+--------------+----------------+-----------+----------+---------------------+----------------+----------------+--------+------+----------+------------+----------------------+----------------------+--------------------+
| Db   | Name         | Definer        | Time zone | Type     | Execute at          | Interval value | Interval field | Starts | Ends | Status   | Originator | character_set_client | collation_connection | Database Collation |
+------+--------------+----------------+-----------+----------+---------------------+----------------+----------------+--------+------+----------+------------+----------------------+----------------------+--------------------+
| test | test_event_2 | root@localhost | SYSTEM    | ONE TIME | 2018-06-07 15:08:00 | NULL           | NULL           | NULL   | NULL | DISABLED |          1 | utf8mb4              | utf8mb4_general_ci   | utf8_general_ci    |
+------+--------------+----------------+-----------+----------+---------------------+----------------+----------------+--------+------+----------+------------+----------------------+----------------------+--------------------+
1 row in set (0.02 sec)</pre>]]></description>
</item>
<item>
<title>An Introduction to Database Events (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/749-an-introduction-to-database-events-part-1.html</link>
<description><![CDATA[<b>Jun 26, 2018</b> by Robert Gravelle<br/><br/><p>In the simplest terms, an event is any task that can be run according to a schedule. Many popular DBMSes include support for events. These are also known as "scheduled events" or as "temporal triggers" because events are triggered by time, as opposed to triggers, which are invoked by database operations such as table updates. Database events may be utilized for a variety of tasks such as optimizing database tables, cleaning up logs, archiving data, or generating reports during off-peak times.</p><p>In today's blog, we'll learn how to view and activate database events. In subsequent blogs, we'll learn how to configure events for various tasks.</p><h1 class="blog-sub-title">Events vs. Scheduled Tasks</h1><p>Although database events are similar to cron jobs in UNIX or scheduled tasks in Windows, they differ in that events are managed and invoked at the database level as opposed to the Operating System (OS) level. Hence, database events are configured using the database's Data Definition Language (DDL), whereas cron jobs and scheduled tasks are defined using that OS's particular commands and/or tools.</p><h1 class="blog-sub-title">Configuring the Event Scheduler Thread</h1><p>Events are executed by a special thread. 
You can see the event scheduler thread and its current state by typing the "SHOW PROCESSLIST" command at the mysql&gt; prompt, provided that you have the PROCESS privilege:</p><pre>mysql&gt;  SHOW PROCESSLIST;
+----+-----------------+-----------------+--------+---------+------+-----------------------------+------------------+
| Id | User            | Host            | db     | Command | Time | State                       | Info             |
+----+-----------------+-----------------+--------+---------+------+-----------------------------+------------------+
|  2 | event_scheduler | localhost:49670 | NULL   | Daemon  |    3 | Waiting for next activation |                  |
|  3 | root            | localhost:49671 | NULL   | Sleep   |   43 |                             | NULL             |
|  4 | root            | localhost:49672 | NULL   | Sleep   |  180 |                             | NULL             |
|  5 | root            | localhost:56134 | sakila | Query   |    0 | starting                    | SHOW PROCESSLIST |
|  6 | root            | localhost:56136 | sakila | Sleep   | 1025 |                             | NULL             |
+----+-----------------+-----------------+--------+---------+------+-----------------------------+------------------+
5 rows in set (0.01 sec)</pre><p style="font-size: 16px;"><b>Activating the Event Scheduler</b></p><p>Activating and enabling the Event Scheduler is done via the global <font face="courier new">event_scheduler</font> system variable. One of the following three values may be assigned to it:</p><ul style="list-style-type: disc"><li><b>ON:</b> This starts the Event Scheduler; the event scheduler thread runs and executes all scheduled events. 
This is the default value.<br/><br/>When the Event Scheduler is ON, the event scheduler thread is listed in the output of SHOW PROCESSLIST as a daemon process, and its state is represented as "Waiting for next activation", as shown in the output above.<br/><br/>Either "ON" or its numeric equivalent of 1 are acceptable values. Thus, any of the following 4 statements can be used in the mysql client to turn on the Event Scheduler:<br/><br/><ul style="list-style-type: decimal;"><li>SET GLOBAL event_scheduler = ON;</li><li>SET @@global.event_scheduler = ON;</li><li>SET GLOBAL event_scheduler = 1;</li><li>SET @@global.event_scheduler = 1;</li></ul></li><br/><li><b>OFF:</b> Stops the Event Scheduler. The event scheduler thread does not run, is not shown in the output of SHOW PROCESSLIST, and no scheduled events are executed.<br/><br/>When the event_scheduler variable is set to OFF (Event Scheduler is stopped), it can be (re)started by setting the value of event_scheduler to ON.<br/><br/>It is also possible to use 0 in place of "OFF", so that any of these 4 statements can be used to turn off the Event Scheduler:<br/><br/><ul style="list-style-type: decimal;"><li>SET GLOBAL event_scheduler = OFF;</li><li>SET @@global.event_scheduler = OFF;</li><li>SET GLOBAL event_scheduler = 0;</li><li>SET @@global.event_scheduler = 0;</li></ul></li><br/><li><b>DISABLED:</b> This value puts the Event Scheduler thread to sleep so that the Event Scheduler is non-operational. Moreover, when the Event Scheduler is DISABLED, the event scheduler thread does not appear in the output of SHOW PROCESSLIST.<br/><br/><b>Note that the Event Scheduler state cannot be changed at runtime when disabled.</b></li></ul><p style="font-size: 16px;"><b>Displayed event_scheduler Values</b></p><p>Although ON and OFF have numeric equivalents, DISABLED has none. 
For that reason, event_scheduler values generated by either a SELECT or SHOW VARIABLES are always displayed using the full text value, i.e., either "OFF", "ON", or "DISABLED". Consequently, "ON" and "OFF" are recommended over 1 and 0 when setting the event_scheduler variable.</p><pre>mysql> SHOW VARIABLES like 'event_%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | OFF   |
+-----------------+-------+
1 row in set (0.02 sec)</pre>]]></description>
</item>
<item>
<title>Manage MySQL Users in Navicat Premium - Part 4: The Privilege Manager tool</title>
<link>https://www.navicat.com/company/aboutus/blog/745-manage-mysql-users-in-navicat-premium-part-4-the-privilege-manager-tool.html</link>
<description><![CDATA[<b>Jun 19, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 4: The Privilege Manager tool</h1><p>In this series, we've been exploring how to perform common user administration tasks using Navicat's flagship product, Navicat Premium. In the last blog, we looked at the Server Privileges, Privileges, and SQL Preview tabs of the New User Object tab.</p><p>Setting privileges for each user as we did in the last blog is not the only way to do so; the Privilege Manager offers another way to set privileges for a connection as well as its database objects. Available for MySQL, Oracle, PostgreSQL, SQL Server and MariaDB, the Privilege Manager will be the subject of today's blog.</p><h1 class="blog-sub-title">Working with Connection-level Privileges</h1><p>To access the Privilege Manager, click the <i>Privilege Manager</i> button on the <i>User Object</i> toolbar. That will open the Privilege Manager in a new tab, displaying the most recently opened connection.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/privilege_manager_button.jpg" style="max-width: 100%;"></td></tr><p>From there, you can either work with Connection-level privileges or those associated with a particular database. Let's start with Connection-level privileges.</p><p>The Connection always appears at the top of the tree, with databases below it, along with individual objects within each:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/privilege_manager_connection.jpg"></td></tr><ul style="list-style-type: decimal;"><li>If it is not already selected, click the Connection name to see a list of users that have access, along with their respective privileges.</li><br/><li>Scroll the grid horizontally until you reach the privilege that you're interested in.</li><br/><li>Check the box beside the privilege to assign it to that user.
For example, clicking the Trigger checkbox in the bob_s@localhost row will grant Trigger privileges to that user:<br/><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/privilege_manager_trigger.jpg" style="max-width: 100%;"><br/><br/>Conversely, unchecking the box for a privilege removes that privilege.</li><br/><li>Don't forget to click the Save button on the Privilege Manager toolbar to commit your changes. That being said, if you forget, Navicat will ask whether you'd like to save your changes when you close the Privilege Manager tab.</li></ul><h1 class="blog-sub-title">Working with Database-level Privileges</h1><p>Clicking a Database name displays a list of users that have access to it, along with their respective privileges. To assign <i>Trigger</i> privileges to bob_s@localhost on the Sakila Database:</p><ul style="list-style-type: decimal;"><li>Click the Sakila Database in the Object tree.</li><br/><li>Scroll the grid horizontally until reaching the <i>Trigger</i> privilege checkbox.</li><br/><li>Check the box next to the <i>Trigger</i> privilege to assign it to that user:<br/><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/privilege_manager_sakila_trigger.jpg" style="max-width: 100%;"><br/><br/>Conversely, unchecking the box for the privilege listed removes that privilege.</li><br/><li>Once again, don't forget to click the Save button on the Privilege Manager toolbar to commit your changes. If you forget, Navicat will ask whether you'd like to save your changes when you close the Privilege Manager tab.</li></ul><h1 class="blog-sub-title">Managing Privileges for Database Objects</h1><p>To grant privileges for specific Database objects such as Tables, Views, Functions, and Stored Procedures, use the <i>Add Privilege</i> button on the Privilege Manager tab toolbar.</p><p>For example:</p><ul style="list-style-type: decimal;"><li>Expand the nodes in the tree view until you reach the target object.
The following image shows the sakila database's <i>film_in_stock</i> stored procedure:<br/><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/privilege_manager_film_in_stock_proc.jpg" style="max-width: 100%;"></li><br/><li>Choose the <i>film_in_stock</i> object and click the <i>Add Privilege</i> button to open the dialog.</li><br/><li>Check the box beside the user on the left pane.</li><br/><li>In the grid, check the relevant options against the privileges listed to grant that object privilege to the selected user. For instance, the following would grant Execute privileges to the bob_s@localhost and secure_admin_99@localhost users for the <i>film_in_stock</i> procedure on the <i>sakila</i> database:<br/><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/add_privilege_dialog.jpg" style="max-width: 100%;"></li><br/><li>Click the <i>OK</i> button to close the dialog and commit your changes. The new privileges will now appear in the grid:<br/><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180619/execute_privileges_in_privilege_manager_tab.jpg" style="max-width: 100%;"></li></ul><p>To revoke privileges for a user on any Object, click the <i>Delete Privilege</i> button. For example, to revoke bob_s@localhost's privileges for the <i>film_in_stock</i> procedure on the <i>sakila</i> database that we just added:</p><ul style="list-style-type: decimal;"><li>Make sure that the <i>film_in_stock</i> procedure is selected in the tree view.</li><br/><li>Select the bob_s@localhost row in the grid to highlight it.</li><br/><li>Now click the <i>Delete Privilege</i> button to remove that row from the grid.</li><br/><li>Your changes will be committed when you save your settings.</li></ul>]]></description>
</item>
<item>
<title>Manage MySQL Users in Navicat Premium - Part 3: Configuring User Privileges</title>
<link>https://www.navicat.com/company/aboutus/blog/738-manage-mysql-users-in-navicat-premium-part-3-configuring-user-privileges.html</link>
<description><![CDATA[<b>Jun 12, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 3: Configuring User Privileges</h1><p>In this series, we've been exploring how to perform common user administration tasks using Navicat's flagship product, Navicat Premium. In Part 1, we learned how to secure the MySQL root account using the Navicat Premium User Management Tool. <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/730-manage-mysql-users-in-navicat-premium-part-1-securing-the-root-2.html" target="_blank">Part 2</a> focussed on setting a new user's account details, account limits, and SSL settings. In today's blog, we'll move on to the remaining tabs of the New User Object tab: namely, Server Privileges, Privileges, and SQL Preview.</p><h1 class="blog-sub-title">Server Privileges</h1><p>This tab contains a list of privileges that apply to the server connection as a whole. To assign a privilege, simply check the option against the server privilege listed. For example, the following configuration assigns Select, Update, Insert, and Delete privileges to our new bob_s@localhost user for the entire server:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/server_privileges.jpg" style="max-width: 100%"></td></tr><p>Rather than select individual checkboxes, you can also grant and revoke all listed privileges at once by right-clicking anywhere on the Server Privileges tab and choosing the appropriate option from the context menu:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/server_privileges_popup_menu.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Privileges</h1><p>Want to assign privileges for a specific database? The Privileges tab is the place to do that. It shows each registered database for a connection, along with a list of privileges in each row.
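</p><p>Under the hood, the checkboxes on these tabs correspond to standard MySQL GRANT statements, which you can verify on the SQL Preview tab. For example, assigning Create, Drop, and Alter privileges on the sakila database to the bob_s@localhost user is roughly equivalent to a statement along these lines (a sketch, not necessarily the exact SQL that Navicat generates):</p><pre>GRANT CREATE, DROP, ALTER ON sakila.* TO 'bob_s'@'localhost';</pre><p>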
Here's the Privileges tab assigning Create, Drop, and Alter privileges to our user on the Sakila database:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/privileges.jpg" style="max-width: 100%"></td></tr><p>Now our new bob_s@localhost user has Create, Drop, and Alter privileges on the Sakila database, in addition to Select, Update, Insert, and Delete privileges for the entire server.</p><p style="font-size: 16px;"><b>Showing/Hiding Columns</b></p><p>Due to the large number of privileges, you'll likely have to scroll horizontally to see some of them. However, if you are not interested in some privileges, you can hide them by right-clicking anywhere within the tab and choosing <i>Show/Hide Columns</i> from the context menu. That will display a list of column names that you may show or hide by checking or unchecking the associated checkbox. This configuration removes several admin-related privileges from the table:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/privileges_show-hide_columns_list.jpg" style="max-width: 100%"></td></tr><p>Note that columns are added and removed after the Save operation.</p><h1 class="blog-sub-title">Viewing SQL Statements</h1><p>You can preview the SQL statements generated by Navicat before committing your changes on the SQL Preview tab. Statements are read-only and should only be used to verify your changes:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/sql_preview.jpg" style="max-width: 100%"></td></tr><p>Upon saving your changes, the SQL Preview tab contents are cleared so that the same statements are not executed again.</p><h1 class="blog-sub-title">User Information</h1><p>After adding our new user, the User tab name will be updated from "Untitled (MYSQL) - User" to "bob_s@localhost (MYSQL) - User" where "MYSQL" is the connection name.
If the Information Pane is visible, you'll see a short synopsis of the user's rights, including the <i>SSL Type</i>, <i>Max queries per hour</i>, <i>Max updates per hour</i>, <i>Max connections per hour</i>, and <i>Max user connections</i>, as well as whether or not they are a <i>Superuser</i>:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/user_information.jpg" style="max-width: 100%"></td></tr><p><i>Note that you may have to refresh the tab to see the latest stats.</i></p><p>You can display the Information Pane via <i>View</i> &gt; <i>Information Pane</i> &gt; <i>View Information Pane</i> from the main menu:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180612/show_information_pane.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Going Forward</h1><p>In Part 4, we'll learn how to manage privileges from one place using the Privilege Manager tool.</p>]]></description>
</item>
<item>
<title>Manage MySQL Users in Navicat Premium - Part 2: Creating a New User</title>
<link>https://www.navicat.com/company/aboutus/blog/730-manage-mysql-users-in-navicat-premium-part-1-securing-the-root-2.html</link>
<description><![CDATA[<b>Jun 5, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 2: Creating a New User</h1><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/728-manage-mysql-users-in-navicat-premium-part-1-securing-the-root.html" target="_blank">Part 1</a>, we learned how to secure the MySQL root account using the Navicat Premium User Management Tool. Today's blog will focus on setting a new user's account details, account limits, and SSL settings.</p><h1 class="blog-sub-title">The General Tab</h1><p>Clicking the New User button on the Objects toolbar opens an Untitled User tab. It, in turn, contains five tabs named General, Advanced, Server Privileges, Privileges, and SQL Preview. We covered the General tab in Part 1, but we'll quickly recap here. On the General tab, we need to provide:</p><ul style="list-style-type: decimal;"><li>The <i>User Name</i>.</li><li>The database <i>Host</i>.</li><li>The authentication <i>Plugin</i>. Choose "mysql_native_password" or "sha256_password" from the dropdown.</li><li>The <i>Password</i>.</li><li>The <i>Expire Password Policy</i>.</li></ul><p style="font-size: 16px;"><b>Setting the Password Policy</b></p><p>MySQL enables database administrators to expire account passwords manually, and to establish a policy for automatic password expiration using either the MySQL mysql_native_password or sha256_password built-in authentication plugin.</p><p>Navicat abstracts the usual MySQL mechanism for setting password expiration using the PASSWORD EXPIRE statement by providing several options via a dropdown list. They are:</p><ul style="list-style-type: disc;"><li>DEFAULT: Sets the password expiration length to the database default. Prior to version 5.7.11, the default value was 360 days.
From version 5.7.11 onwards, the default value is 0 days, which effectively disables automatic password expiration.</li><li>IMMEDIATE: Expires an account password, thus forcing the user to update it.</li><li>INTERVAL: Specifies the number of days in which the current password expires.</li><li>NEVER: Allows the current password to remain active indefinitely. Useful for scripts and other automated processes.</li></ul><p>Here's an example:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180605/new_user_general_tab.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">The Advanced Tab</h1><p>Here you'll find settings for account limits and SSL.</p><p style="font-size: 16px;"><b>Account Limits</b></p><p>MySQL permits per-account limits on the use of various server resources so that no single user can monopolize them. Limits include:</p><ul style="list-style-type: disc;"><li>The number of queries an account can issue per hour.</li><li>The number of updates an account can issue per hour.</li><li>The number of times an account can connect to the server per hour.</li><li>The total number of simultaneous database connections an account can make.</li></ul><p>These equate to the <i>Max queries per hour</i>, <i>Max updates per hour</i>, <i>Max connections per hour</i>, and <i>Max user connections</i> Advanced tab fields. Each of these fields accepts a value of zero (0) or a positive integer.</p><p style="font-size: 16px;"><b>SSL Settings</b></p><p>In order to use encrypted connections, OpenSSL or yaSSL must be present in your system. Also, the MySQL server needs to be built with TLS support and be properly configured to use one of them.
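</p><p>A quick way to confirm that your server has usable TLS support is to check the have_ssl variable from the mysql client; in MySQL 5.x it reports YES when support is compiled in and configured, or DISABLED when support is compiled in but not enabled:</p><pre>SHOW VARIABLES LIKE 'have_ssl';</pre><p>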
Note that the term SSL, which refers to the old, now insecure protocol that preceded TLS, is still used in many of the variable names and options for compatibility reasons, although MySQL only uses its more secure TLS successors.</p><p>The <i>SSL Type</i> dropdown field maps to the ssl_type column of the mysql.user table, which only accepts certain values: ANY, SPECIFIED, and X509 (as well as '' for NONE).</p><p>Moreover, the MySQL GRANT statement also accepts the ISSUER, SUBJECT, and CIPHER options. These can be combined in any order, and if you use any of them, REQUIRE X509 is implicit.</p><p>Here's a GRANT statement, followed by the equivalent Advanced tab in Navicat:</p><pre>GRANT USAGE ON *.* TO 'bob_s'@'localhost'
    REQUIRE SUBJECT '/CN=www.mydom.com/O=My Dom, Inc./C=US/ST=Oregon/L=Portland'
    AND ISSUER '/C=FI/ST=Somewhere/L=City/ O=Some Company/CN=Peter Parker/emailAddress=p.parker@marvel.com'
    AND CIPHER 'SHA-DES-CBC3-EDH-RSA';</pre><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180605/advanced_tab.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Going Forward</h1><p>In Part 3, we'll move on to the last three User tabs.</p>]]></description>
</item>
<item>
<title>Manage MySQL Users in Navicat Premium - Part 1: Securing the Root </title>
<link>https://www.navicat.com/company/aboutus/blog/728-manage-mysql-users-in-navicat-premium-part-1-securing-the-root.html</link>
<description><![CDATA[<b>May 29, 2018</b> by Robert Gravelle<br/><br/><h1 class="blog-sub-title">Part 1: Securing the Root Account</h1><p>Managing the users of a database is one of the key responsibilities of the database administrator (DBA). Coordinating how users in your organization access your database typically entails many separate tasks, from adding new users and blocking access for users who have left the organization, to helping users who cannot log in.</p><p>MySQL ships with the mysqladmin command-line client for performing administrative operations. You can use it to check the server's configuration and current status, to create and drop databases, and more. For DBAs who prefer something a little more sophisticated, Navicat for MySQL and Premium include everything you need to manage your MySQL users so that you don't ever have to launch a separate command window. In this series, we'll explore how to perform common user administration tasks from within Navicat. Today's blog describes the three default MySQL user accounts and how to secure the root user.</p><p>Although we'll be using Navicat Premium for the purposes of this blog, keep in mind that Navicat for MySQL includes the same functionality, but specifically targeting MySQL.</p><h1 class="blog-sub-title">Default User Accounts</h1><p>User management functionality is accessible via the User button. Clicking it displays the Objects tab, which includes all of the registered users for the MySQL connection.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180529/objects_tab.jpg" style="max-width: 100%;"></td></tr><p>The above image shows the default user accounts. During installation, MySQL creates three user accounts that should be considered reserved:</p><ul style="list-style-type: disc"><li>'root'@'localhost': The super user.
This account has all privileges and can perform any operation.<br/><br/>Strictly speaking, this account name is not reserved, in the sense that you can (and, in production environments, should!) rename the root account to something else to avoid exposing a highly privileged account with the widely-known default name.</li><br/><li>'mysql.sys'@'localhost': Used as the DEFINER for sys schema objects. Use of the mysql.sys account avoids problems that occur if a DBA renames or removes the root account. This account is locked so that it cannot be used for client connections.</li>  <br/><li>'mysql.session'@'localhost': Used internally by plugins to access the server. This account is locked so that it cannot be used for client connections.</li></ul><h1 class="blog-sub-title">Editing User Details</h1><p>If we wanted to view and/or modify the details of a user, we could either double-click it or highlight it in the Objects tab and then click the Edit User button on the Objects toolbar. That opens an Editor tab for that user. It, in turn, contains five tabs named General, Advanced, Server Privileges, Privileges, and SQL Preview. We'll cover each of these tabs in greater detail in the next installment, but for now, let's see how we could change some data on the General tab to secure the root account.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180529/root_user_in_general_tab.jpg" style="max-width: 100%;"></td></tr><ul style="list-style-type: decimal;"><li>It is common knowledge that the "root" account is the super user. Therefore, our first action should be to change it to something less intuitive, like "secure_admin_99". The addition of numbers makes it that much harder to guess.</li><br/><li>Choose the sha256_password plugin.<br/><br/>In all versions of MySQL Server since version 5.5, the default password mechanism is implemented in the mysql_native_password authentication plugin (which is enabled by default). 
This mechanism leverages SHA1 hashing. While this algorithm was considered secure back in the days of MySQL 4.1, it now has known weaknesses that may become practically exploitable in the coming years.<br/><br/>The sha256_password plugin was introduced in MySQL Server 5.6, and provides additional security focused on password storage. It does so by addressing the two key elements which make mysql_native_password vulnerable: hash computation becomes more expensive/time-consuming, and the output is randomized. Additionally, using the stronger SHA-256 algorithm eliminates dependencies on the vulnerable SHA1 algorithm.</li><br/><li>Provide a strong password.<br/><br/>Strong passwords should be difficult to guess or crack. A good password:<br/><br/><ul style="list-style-type: circle;"><li>Is at least eight characters long.</li><li>Doesn't contain your user name, real name, or company name.</li><li>Doesn't contain a complete word.</li><li>Is significantly different from previous passwords.</li><li>Contains uppercase letters, lowercase letters, numbers, and symbols.</li></ul></li><br/><li>Provide an Expire Password Policy.<br/><br/>By specifying an interval, we can have MySQL prompt users to change their password after a certain number of days have elapsed, such as 90 days.</li></ul><p>Here is the General tab again with the updated fields:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180529/root_user_secured.jpg" style="max-width: 100%;"></td></tr><p>Click the Save button to update the account settings.</p><h1 class="blog-sub-title">Going Forward</h1><p>In the next installment, we'll learn how to create new users and assign their privileges.</p>]]></description>
</item>
<item>
<title>Navigation Pane Tips and Tricks Part 2: Virtual Grouping and Connection Colouring</title>
<link>https://www.navicat.com/company/aboutus/blog/720-navigation-pane-tips-and-tricks-part-2-virtual-grouping-and-connection-colouring.html</link>
<description><![CDATA[<b>May 23, 2018</b> by Robert Gravelle<br/><br/><p>The Virtual Group feature provides a mechanism for logical grouping of the Navigation Pane's database objects by category, so that all objects are more effectively organized. It can be applied to many different object types, including:</p><ul style="list-style-type: disc"><li>Connections</li><li>Tables</li><li>Views</li><li>Functions</li><li>Queries</li><li>Reports</li><li>Backups</li><li>Automations</li><li>Models</li></ul><p>Virtual Grouping is supported by all Non-Essentials Editions of Navicat's database management and design products, including Navicat MySQL, MariaDB, SQL Server, SQLite, Oracle, PostgreSQL, and Premium.</p><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/719-navigation-pane-tips-and-tricks-part-1-managing-connections.html" target="_blank">part 1</a> of this 2-part series, we learned how to manage connections within the Navigation Pane. In today's conclusion, we'll explore the Virtual Grouping and Connection Colouring features.</p><h1 class="blog-sub-title">Creating a New Group</h1><p>It's quite easy to create a new group. In fact, it only takes 2 steps!</p><ul style="list-style-type: decimal;"><li>In the main window, right-click anywhere in the Navigation pane or the Objects tab and select <i>New Group</i> or <i>Manage Group</i> -&gt; <i>New Group</i> from the context menu.</li><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/new_group_menu_command.jpg" style="max-width: 100%;"></td></tr><br/><br/><li>The new group will then appear in the Navigation Pane as a text field.
Enter a name in the new group text field and press ENTER to save the name.</li><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/new_group.jpg" style="max-width: 100%;"></td></tr></ul><p>Once you've created a new group, right-clicking it will present a context menu that allows you to:</p><ul style="list-style-type: decimal;"><li>Add a new connection to the group.</li><li>Create another new group.</li><li>Delete the group.</li><li>Rename the group.</li><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/group_menu_commands.jpg" style="max-width: 100%;"></td></tr></ul><h1 class="blog-sub-title">Moving an Object to a Group</h1><p>Once you've created a group, there are a couple of ways to move an object to it.</p><ul style="list-style-type: decimal;"><li>In the main window, right-click an object and select <i>Manage Group</i> -&gt; <i>Move To</i>.</li><br/><li>Select an existing group.</li><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/move_to_group_command.jpg" style="max-width: 100%;"></td></tr></ul><p>Alternatively, you can simply drag an object to the group.</p><h1 class="blog-sub-title">Moving an Object out of a Group</h1><p>A similar process may be employed to move an object out of a group:</p><ul style="list-style-type: decimal;"><li>In the main window, right-click an object and select <i>Manage Group</i> -&gt; <i>Exclude From Group</i>.</li><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/exclude_from_group_command.jpg" style="max-width: 100%;"></td></tr></ul><p>You can also move an object out of a group using drag &amp; drop.</p><h1 class="blog-sub-title">Hiding the Group Structure</h1><p>If you want to hide the group structure, you can select the <i>View</i> -&gt; <i>Navigation Pane</i> -&gt; <i>Flatten Connection</i> and <i>View</i> -&gt; <i>Flatten Object List</i> commands from the main
menu.</p><h1 class="blog-sub-title">Connection Colouring</h1><p>Navicat supports connection highlighting via different colors for easier identification of connections and their database objects. It lets you immediately know which connection a database belongs to when you're working with its objects. The highlight color displays in the Navigation pane, as well as on the tab of the object window.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/connection_colors.jpg" style="max-width: 100%;"></td></tr><p>To highlight a connection, right-click it in the Navigation pane and select <i>Color</i>.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180523/color_command.jpg" style="max-width: 100%;"></td></tr><p>Selecting <i>Other...</i> from the list opens the system color dialog so that you can create your own custom color.</p><p>To remove the color from the connection, right-click it and select <i>None</i> from the color context menu.</p><h1 class="blog-sub-title">Conclusion</h1><p>In this 2-part series, we learned how to manage connections within the Navigation Pane, apply Virtual Grouping to Navigation Pane objects, and differentiate connections using various colors.</p>]]></description>
</item>
<item>
<title>Navigation Pane Tips and Tricks Part 1: Managing Connections</title>
<link>https://www.navicat.com/company/aboutus/blog/719-navigation-pane-tips-and-tricks-part-1-managing-connections.html</link>
<description><![CDATA[<b>May 15, 2018</b> by Robert Gravelle<br/><br/><p>All of Navicat's database management and design products, i.e. Navicat MySQL, MariaDB, SQL Server, SQLite, Oracle, PostgreSQL, and Premium, include a Navigation Pane. It provides more than a means to navigate between your connections, schemas, databases and database objects. In Non-Essentials Editions, it also features Virtual Grouping, which is a logical grouping of objects by categories. In today's tip, we'll be going over how to manage your connections within the Navigation Pane. In part 2, we'll learn how to utilize Virtual Grouping.</p><h1 class="blog-sub-title">Navigation Pane Basics</h1><p>Located on the left-hand side of the Navicat GUI, the Navigation pane employs a tree structure which allows you to invoke actions upon the database and its objects through pop-up menus. The Navigation pane is visible by default, but can be toggled via the View -&gt; Navigation Pane -&gt; Show Navigation Pane command from the main menu.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/navigation pane in GUI.jpg" style="max-width: 100%;"></td></tr><p>To connect to a database or schema, you can simply double-click it in the pane.</p><p>When logged in to Navicat Cloud, the Navigation pane is split into Navicat Cloud and My Connections sections.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/navigation pane with Navicat Cloud section.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Viewing Options</h1><p>In the Options window, there is an option to <i>Show objects under schema in navigation pane</i>.
When checked, all database objects are also displayed in the pane.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/navigation pane options.jpg" style="max-width: 100%;"></td></tr><p>Otherwise, active connections will display without an expand arrow beside them.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/navigation pane without db objects.jpg" style="max-width: 100%;"></td></tr><p>Note that the <i>Show objects under schema in navigation pane</i> option does not affect currently active connections.</p><p>To show only those objects whose connection is currently active, choose <i>View -&gt; Navigation Pane -&gt; Show Only Active Objects</i> from the main menu.</p><p>Finally, you can hide the group structure in the Navigation pane by selecting <i>View -&gt; Navigation Pane -&gt; Flatten Connection</i> from the main menu.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/flatten connection command.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Filtering</h1><p>You can filter the tree by setting the focus on any object within the tree and typing a search string. As you type, objects that do not match the search string will be hidden from view.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/filtering.jpg" style="max-width: 100%;"></td></tr><p>Clearing the search field restores hidden objects.</p><h1 class="blog-sub-title">Moving/Copying a Connection to a Project</h1><p>All Navicat Cloud objects are located under different projects. These may be shared between Navicat Cloud accounts for collaboration. Quite often, you'll want to move or copy a local connection to a project for sharing.
Here's how:</p><ul style="list-style-type: decimal;"><li>Right-click a connection under My Connections and choose <i>Move Connection To</i> or <i>Copy Connection To</i> from the popup menu.</li><br/><li>You can either select an existing project or create a new one.</li><br/><li>The connection will then be moved or copied to Navicat Cloud. All query files and virtual groups associated with the connection will also be moved or copied.</li></ul><br/><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180515/move connection command.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Move/Copy a Connection to My Connections</h1><p>Likewise, you may also want to move or copy a connection from the Cloud to your local connections, under My Connections. To do that:</p><ul style="list-style-type: decimal;"><li>Right-click a connection under Navicat Cloud and choose <i>Move Connection To -&gt; My Connections</i> or <i>Copy Connection To -&gt; My Connections</i>.</li><br/><li>The connection will then be moved or copied to My Connections. All query files and virtual groups associated with the connection will also be moved or copied to the local machine.</li></ul><h1 class="blog-sub-title">Conclusion</h1><p>In today's tip, we learned how to manage your connections within the Navigation Pane. In part 2, we'll explore how to utilize Virtual Grouping.</p>]]></description>
</item>
<item>
<title>What to Monitor on SQL Server (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/708-what-to-monitor-on-sql-server-part-2.html</link>
<description><![CDATA[<b>May 8, 2018</b> by Robert Gravelle<br/><br/><p>In <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/707-what-to-monitor-on-sql-server-part-1.html" target="_blank">What to Monitor on SQL Server (Part 1)</a>, we reviewed two of the four main categories of performance metrics to monitor in order to gauge SQL Server efficacy, namely Disk Activity and Processor Utilization. Today's blog will cover Memory and Server operations.</p><h1 class="blog-sub-title">Memory</h1><p>Memory Utilization monitoring attempts to determine the amount of memory used by the database server while processing a request. You should monitor your instance of SQL Server periodically to confirm that memory usage is within typical ranges.</p><p>By default, SQL Server dynamically grows and shrinks the size of its buffer pool (cache), depending on the physical memory load that the operating system reports. As long as sufficient memory (between 4 MB and 10 MB) is available to prevent paging, the SQL Server buffer pool will continue to grow. As other processes on the same computer as SQL Server allocate memory, the SQL Server buffer manager will release memory as needed. SQL Server can free and obtain several megabytes of memory each second, which allows it to quickly adjust to memory allocation changes.</p><p>SQL Server works with objects and counters, with each object comprising one or more counters. For example, the Buffer Manager object provides counters to monitor how SQL Server uses memory to store data pages in the buffer pool.</p><p>To monitor for a low-memory condition, use the following counters:</p><p style="margin-left: 24px;">Available MBytes: indicates how much memory is available for new processes. If available memory is constantly low and server load cannot be reduced, it's necessary to add more RAM.</p><p style="margin-left: 24px;">Pages/sec: this counter indicates the rate of hard page faults, i.e. how often pages are read from or written to disk to resolve virtual memory references. 
A rule of thumb says that it should be lower than 20. Higher numbers might mean excessive paging. The Memory: Page Faults/sec counter can further indicate whether SQL Server or some other process is causing it.</p><p>You can also establish upper and lower limits for how much memory is used by the SQL Server database engine with the min server memory and max server memory configuration options.</p><h1 class="blog-sub-title">Other Server-related Metrics</h1><p>Although Disk Activity, Processor Utilization, and Memory are the most important areas to monitor, there are a few other general server metrics worth checking.</p><p style="margin-left: 24px;">Access Methods - Full scans/sec: higher numbers (more than 1 or 2) may mean you are not using indexes and are resorting to table scans instead.</p><p style="margin-left: 24px;">Buffer Manager - Buffer Cache hit ratio: This is the percentage of requests serviced by the data cache. When the cache is properly used, this should be over 90%. The counter can be improved by adding more RAM.</p><p style="margin-left: 24px;">Memory Manager - Target Server Memory (KB): indicates how much memory SQL Server wants. If this is the same as the Memory Manager - Total Server Memory (KB) counter (see below), then you know SQL Server has all the memory it needs.</p><p style="margin-left: 24px;">Memory Manager - Total Server Memory (KB): indicates how much memory SQL Server is actually using. If smaller than the Memory Manager - Target Server Memory (KB), then SQL Server could benefit from more memory.</p><p style="margin-left: 24px;">Locks - Average Wait Time: This counter shows the average time needed to acquire a lock. This value needs to be as low as possible. If unusually high, you may need to look for blocking processes. You may also need to examine your users' SQL statements, as well as check for any other I/O bottlenecks.</p><p>Although these metrics are some of the most useful, SQL Server offers a number of other metrics that may also come in handy. 
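</p><p>Many of these counters can also be read from within SQL Server itself via the sys.dm_os_performance_counters dynamic management view. Here is a minimal sketch; the exact counter names below may vary slightly between SQL Server versions, so verify them against the view's contents on your own instance:</p><pre>SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)',
                       'Full Scans/sec');</pre><p>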
We'll examine these in a future blog.</p>]]></description>
</item>
<item>
<title>What to Monitor on SQL Server (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/707-what-to-monitor-on-sql-server-part-1.html</link>
<description><![CDATA[<b>May 2, 2018</b> by Robert Gravelle<br/><br/><p>Microsoft SQL Server is more than 30 years old now, and remains one of the most popular commercial relational databases in use today. It runs very efficiently with only minimal tweaking, but can also be tuned for optimal performance. Before fine tuning your SQL Server database, you first have to monitor its performance over a broad spectrum of conditions and workloads. In today's tip, we'll review a few of the most instructive metrics to monitor in order to gauge server performance.</p><h1 class="blog-sub-title">Benefits of Performance Monitoring</h1><p>To keep your database server running smoothly, it's crucial to monitor its performance on a regular basis. A good SQL Server monitoring plan can help you stay on top of:</p><ul style="list-style-type: disc"><li>Performance: Monitoring database performance can help uncover possible bottlenecks and other issues as soon as they happen, so that you're better prepared for future occurrences. Beyond being proactive, performance metrics can help guide you in deciding whether or not a performance upgrade is warranted. For example, monitoring queries as they are executed might reveal slow performers that require modification.</li><br/><li>Growth: Database traffic tends to increase faster than predicted. By observing user and traffic patterns, you can anticipate future upgrades.</li><br/><li>Security: People tend to associate the term database security with auditing. Auditing is instrumental in tracking down the source of unauthorized database use and, depending on the product used, can potentially stop it in its tracks. 
However, performance monitoring can help confirm that adequate security measures have been applied.</li><br/></ul><h1 class="blog-sub-title">Performance Metrics</h1><p>SQL Server performance metrics generally target one of four components: Disk Activity, Processor Utilization, Memory, and the Server itself:</p><p style="font-size: 18px">Disk Activity</p><ul style="list-style-type: disc"><li>% Disk Time: This counter monitors the portion of time the disk is busy with read/write activity. Its value is the Average Disk Queue Length value represented as a percentage (i.e. multiplied by 100). If Average Disk Queue Length is 1, % Disk Time is 100%. If the value is higher than 90% per disk, additional investigation is needed. First, check the Current Disk Queue Length value. If it's higher than the threshold of 2 per physical disk, monitor whether the high values occur frequently.</li><br/><li>Average Disk Queue Length: The number of I/O operations waiting. For example, in a 6-disk array a Current Disk Queue Length value of 12 means that the queue is 2 per disk. The number of pending I/O requests should not rise consistently over 1.5 to 2 times the number of spindles of the physical disk.</li><br/><li>Page reads/sec and page writes/sec: The SQL Server Buffer Manager metrics Page reads per second and page writes per second show how many pages were read from or written to disk in one second. This is a server-level metric, hence the number indicates page reads for all databases on the instance. The recommended page reads/sec and page writes/sec value should be under 90. Higher values usually indicate insufficient memory and indexing issues.</li></ul><p style="font-size: 18px">Processor Utilization</p><ul style="list-style-type: disc;"><li>% Processor time: The percentage of time that the processor spends on executing user processes such as SQL Server. In other words, this is the percentage of processor non-idle time spent on user processes. 
Note that multiprocessor systems have a separate instance for each CPU. The recommended <i>% Processor Time</i> value is below 80%; a consistent 80-90% is too high and should be addressed.</li><br/><li>% Privileged time: Indicates the time spent on Windows kernel commands (i.e. SQL Server I/O requests). If both this and Physical Disk counters are high, you may need a faster disk or should lower the load on this server.</li><br/><li>% user time: The percentage of time the CPU spends on user processes.</li><br/><li>Queue Length: The number of threads waiting for processor time. A high number may indicate the need for faster or additional processors.</li></ul><p>In part 2, we'll move on to metrics that measure Memory and Server operations.</p>]]></description>
</item>
<item>
<title>MySQL 8 Component Architecture and Error Logging</title>
<link>https://www.navicat.com/company/aboutus/blog/706-mysql-8-component-architecture-and-error-logging.html</link>
<description><![CDATA[<b>April 24, 2018</b> by Robert Gravelle<br/><br/><p>One of the many significant changes to MySQL Server in version 8 is a new component-based infrastructure. This makes the architecture more modular while allowing users to extend server capabilities through the addition of individual components.</p><p>Each component provides services that are available to the server as well as to other components. In fact, the server itself is now considered to be a component, equal to other components. Components interact with each other only through the services they provide.</p><h1 class="blog-sub-title">Enabling a Component</h1><p>Component loading and unloading are achieved via the INSTALL COMPONENT and UNINSTALL COMPONENT SQL statements. For example:</p><pre>INSTALL COMPONENT 'file://component_validate_password';
UNINSTALL COMPONENT 'file://component_validate_password';</pre><p>A loader service handles component loading and unloading, and also lists loaded components in the mysql.component system table.</p><p>INSTALL COMPONENT loads components into the server and activates them immediately. The loader service also registers loaded components in the mysql.component system table. For subsequent server restarts, any components listed in mysql.component are loaded by the loader service during startup.</p><p>UNINSTALL COMPONENT deactivates components and unloads them from the server. The loader service also unregisters the components from the mysql.component system table so that they are no longer loaded during startup for subsequent server restarts.</p><p>To see which components are installed, use the statement:</p><pre>SELECT * FROM mysql.component;</pre><h1 class="blog-sub-title">Error Log Filtering and Routing</h1><p>Thanks to the new component architecture, log events can be filtered, and their output can be sent to multiple destinations in a variety of formats, including JSON. 
Log events may even be routed to third-party products like the Navicat Monitor for additional processing and analysis.</p><p>Error log configuration is stored in the global log_error_services and log_error_verbosity variables, which are both stored in the <i>global_variables</i> table. Error log variables are prefixed with <i>log_error_</i>, so we can fetch both as follows:</p><pre>mysql> select * from global_variables where VARIABLE_NAME like 'log_error_%';
+---------------------+----------------------------------------+
| VARIABLE_NAME       | VARIABLE_VALUE                         |
+---------------------+----------------------------------------+
| log_error_services  | log_filter_internal; log_sink_internal |
| log_error_verbosity | 2                                      |
+---------------------+----------------------------------------+</pre><p>There are four available log components. These are stored in the lib/plugin directory, and have an extension of .so:</p><ul style="list-style-type: disc;"><li>component_log_filter_dragnet.so</li><br/><li>component_log_sink_json.so</li><br/><li>component_log_sink_syseventlog.so</li><br/><li>component_log_sink_test.so</li><br/></ul><p>Components can be subdivided into two types: filters and sinks.</p><ul style="list-style-type: disc"><li>Filter components implement filtering of error log events. If no filter component is enabled, no filtering occurs. Otherwise, any enabled filter component affects log events only for components listed later in the log_error_services variable.</li><br/><li>Error log sink components are writers that implement error log output. If no sink component is enabled, no log output occurs. Some sink component descriptions refer to the default error log destination. This is the console or a file and is indicated by the log_error system variable.</li><br/></ul><p>To load a component, you need to specify its URN. 
This is made up of:</p><p>file:// + [the filename without the .so extension]</p><p>For example, to load the JSON sink component, you would enable it like this:</p><pre>mysql> INSTALL COMPONENT 'file://component_log_sink_json';
mysql> SET GLOBAL log_error_services = 'log_filter_internal; log_sink_internal; log_sink_json';
mysql> select * from global_variables where VARIABLE_NAME like 'log_error_%';
+---------------------+-------------------------------------------------------+
| VARIABLE_NAME       | VARIABLE_VALUE                                        |
+---------------------+-------------------------------------------------------+
| log_error_services  | log_filter_internal; log_sink_internal; log_sink_json |
| log_error_verbosity | 2                                                     |
+---------------------+-------------------------------------------------------+</pre><p>We'll explore error logging in MySQL 8 in greater detail in future blogs!</p>]]></description>
</item>
<item>
<title>Disk Encryption in SQL Server</title>
<link>https://www.navicat.com/company/aboutus/blog/704-disk-encryption-in-sql-server.html</link>
<description><![CDATA[<b>April 17, 2018</b> by Robert Gravelle<br/><br/><p>In these turbulent times, encrypting your sensitive data only makes sense. The question is not so much whether to encrypt, but rather, which method of encryption to employ. There are several options, with the three most widely used database encryption methods being:</p><ul style="list-style-type: decimal;"><li>Application Programming Interface (API) - application level</li><li>Plug-In - database level</li><li>Transparent Data Encryption - disk/OS level</li></ul><p>The closer we are to the application, the more source code changes are required. Conversely, the closer we get to the OS, the less effort is required on the developer's part. Disk encryption is also the most secure because even with access to the physical database server, a hacker can't read the data.</p><p>Introduced in SQL Server 2008 and also available in Azure SQL Database and Azure SQL Data Warehouse, Microsoft's Transparent Data Encryption (TDE) achieves this by encrypting the database as data is written to the disk. Likewise, data is decrypted when read from the disk. Therefore, data is in an unencrypted state only when in memory.</p><p>By default, SQL Server does not encrypt data at all, let alone to disk. A few steps are required to activate it. 
In today's tip, we'll review how to turn on TDE in SQL Server.</p><ul style="list-style-type: decimal;"><h1 class="blog-sub-title"><li>Create a master key</li></h1><font face="courier new" color="blue">USE master;<br/>GO<br/>CREATE MASTER KEY ENCRYPTION<br/>     BY PASSWORD=<font color="red">'Use a Strong Password For the Database Master Key'</font>;<br/>GO<br/></font><h1 class="blog-sub-title"><li>Create or obtain a certificate protected by the master key</li></h1><font face="courier new" color="blue">USE master;<br/>GO<br/>CREATE CERTIFICATE <font color="black">My_TDE_Certificate</font><br/>     WITH SUBJECT=<font color="red">'Certificate for TDE'</font>;<br/>GO<br/></font><h1 class="blog-sub-title"><li>Create a database encryption key and protect it by the certificate</li></h1><font face="courier new" color="blue">USE <font color="black">MyDatabase</font><br/>GO<br/>CREATE DATABASE ENCRYPTION KEY<br/>WITH ALGORITHM = AES_256<br/>ENCRYPTION BY SERVER CERTIFICATE <font color="black">My_TDE_Certificate;</font><br/></font><h1 class="blog-sub-title"><li>Set the database to use encryption</li></h1><font face="courier new" color="blue">ALTER DATABASE <font color="black">MyDatabase</font> SET ENCRYPTION ON;<br/>GO<br/></font></ul><h1 class="blog-sub-title">Backing up the Certificate</h1><p>Although this step is not required to encrypt a database using TDE, it's vitally important: without a backup of the certificate, you will not be able to recover your encrypted data from a database backup, should your main database become corrupted. You should also back up the certificate if you'd like to move an encrypted database to another server. 
Here's the code to accomplish the backup:</p><font face="courier new" color="blue">USE master;<br/>GO<br/>BACKUP CERTIFICATE <font color="black">My_TDE_Certificate</font><br/>TO FILE = <font color="red">'C:\temp\TDE_Cert_For_MyData.cer'</font><br/>WITH PRIVATE KEY (file=<font color="red">'C:\temp\TDE_CertKey.pvk'</font>,<br/>ENCRYPTION BY PASSWORD=<font color="red">'Use a Strong Password for Backup Here'</font>);<br/></font><p><b>Make sure to store your backup password in a safe place.</b> You will need this password to restore the certificate if you have to rebuild the server instance that hosts your encrypted database or need to move your database to another server.</p><h1 class="blog-sub-title">Database Backups</h1><p>One of the benefits of encrypting a database using TDE is that database backups will also be encrypted, thus enhancing data security. Since SQL Server 2016 you can also apply compression to your TDE-enabled database backups. Compressing your database backups is important because it enables you to save disk space by generating a backup file that is smaller than the database. In addition, it shortens the time required to restore the database.</p><p>Here's how to apply compression to your TDE-enabled database backup:</p><font face="courier new">BACKUP DATABASE [MyDatabase]<br/>TO DISK = N'E:\backup\MyDatabase_TDE_Compressed.bak'<br/>WITH NOFORMAT, NOINIT, NAME = N'MyDatabase_TDE-Full Database Backup',<br/>SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10<br/>GO<br/></font>]]></description>
</item>
<item>
<title>Get Row Counts from Multiple Tables and Views in MySQL (Part 3)</title>
<link>https://www.navicat.com/company/aboutus/blog/701-get-row-counts-from-multiple-tables-and-views-in-mysql-part-3.html</link>
<description><![CDATA[<b>April 10, 2018</b> by Robert Gravelle<br/><br/><p>In last week's <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/697-getting-advanced-row-counts-in-mysql-part-2.html" target="_blank">Getting Advanced Row Counts in MySQL (Part 2)</a> blog we employed the native COUNT() function to tally unique values as well as those which satisfy a condition. In today's third and final instalment, we'll learn how to obtain row counts from all of the tables within a database or entire schema.</p><h1 class="blog-sub-title">Querying the information_schema Database</h1><p>You don't have to run a count query against every table to get the number of rows. This would be tedious and likely require external scripting if you planned on running it more than once.</p><p>The INFORMATION_SCHEMA database is where each MySQL instance stores information about all the other databases that the MySQL server maintains. Also sometimes referred to as the data dictionary and system catalog, it's the ideal place to look up information about databases, tables, the data type of a column, or access privileges.</p><p>The INFORMATION_SCHEMA TABLES table provides information about (what else?) tables in your databases. By querying it, you can get row counts with a single query.</p><p style="font-size: 18px">Table Counts for One Database</p><p>It's easy enough to obtain a row count for one database. 
Just add a WHERE clause with the condition that the <font face="courier new">table_schema</font> column matches your database name:</p><font face="courier new">SELECT<br/>&nbsp;&nbsp;&nbsp;&nbsp;TABLE_NAME,<br/>&nbsp;&nbsp;&nbsp;&nbsp;TABLE_ROWS<br/>FROM<br/>&nbsp;&nbsp;&nbsp;&nbsp;`information_schema`.`tables`<br/>WHERE<br/>&nbsp;&nbsp;&nbsp;&nbsp;`table_schema` = 'YOUR_DB_NAME';<br/></font><p></p><font face="monospace">+------------+------------+<br/><b>|&nbsp;TABLE_NAME&nbsp;|&nbsp;TABLE_ROWS&nbsp;|</b><br/>+------------+------------+<br/>|&nbsp;Table1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;105&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+------------+<br/>|&nbsp;Table2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;10299&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+------------+<br/>|&nbsp;Table3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;0&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+------------+<br/>|&nbsp;Table4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;1045&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+------------+<br/></font><p style="font-size: 18px">Table Counts for All Databases</p><p>Obtaining row counts for all tables across every database takes a little more effort. 
For that, we have to employ a prepared statement.</p><p>Within the statement, the group_concat() function packs multiple rows into a single string in order to turn a list of table names into a string of many counts connected by unions.</p><font face="courier new">Select<br/>&nbsp;&nbsp;-- Sort the tables by count<br/>&nbsp;&nbsp;concat(<br/>&nbsp;&nbsp;&nbsp;&nbsp;'select * from (',<br/>&nbsp;&nbsp;&nbsp;&nbsp;-- Aggregate rows into a single string connected by unions<br/>&nbsp;&nbsp;&nbsp;&nbsp;group_concat(<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-- Build a "select count(1) from db.tablename" per table<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;concat('select ',<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;quote(db), ' db, ',<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;quote(tablename), ' tablename, '<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;'count(1) "rowcount" ',<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;'from ', db, '.', tablename)<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;separator ' union ')<br/>&nbsp;&nbsp;&nbsp;&nbsp;, ') t order by 3 desc')<br/>into @sql<br/>from (<br/>&nbsp;&nbsp;select<br/>&nbsp;&nbsp;&nbsp;&nbsp;table_schema db,<br/>&nbsp;&nbsp;&nbsp;&nbsp;table_name tablename<br/>&nbsp;&nbsp;from information_schema.tables<br/>&nbsp;&nbsp;where table_schema not in<br/>&nbsp;&nbsp;&nbsp;&nbsp;('performance_schema', 'mysql', 'information_schema')<br/>) t;<br/></font><p>Our concatenated select statements are saved in the @sql variable so that we can run it as a prepared statement:</p><font face="courier new">-- Execute @sql<br/>prepare s from @sql; execute s; deallocate prepare s;<br/></font><font 
face="monospace">+-----+-----------+------------+<br/><b>|&nbsp;db&nbsp;&nbsp;|&nbsp;tablename&nbsp;|&nbsp;rowcount&nbsp;&nbsp;&nbsp;|</b><br/>+-----+-----------+------------+<br/>|&nbsp;DB1&nbsp;|&nbsp;Table1&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;1457&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-----+-----------+------------+<br/>|&nbsp;DB1&nbsp;|&nbsp;Table2&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;1029&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-----+-----------+------------+<br/>|&nbsp;DB2&nbsp;|&nbsp;Table1&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;22002&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-----+-----------+------------+<br/>|&nbsp;DB2&nbsp;|&nbsp;Table2&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;1022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-----+-----------+------------+<br/></font><h1 class="blog-sub-title">A Final Word regarding Speed and Accuracy</h1><p>These queries perform very quickly and produce exact results on MyISAM tables. However, transactional storage engines such as InnoDB do not keep an internal count of rows in a table. Rather, transactional storage engines sample a number of random pages in the table, and then estimate the total rows for the whole table. Because of MVCC, a feature that allows concurrent access to rows, there may be multiple versions of a row at any one point in time. Therefore, the actual <font face="courier new">count(1)</font> depends on when your transaction started and on its isolation level. On a transactional storage engine like InnoDB, you can expect counts to be accurate to within 4% of the actual number of rows.</p>]]></description>
</item>
<item>
<title>Getting Advanced Row Counts in MySQL (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/700-getting-advanced-row-counts-in-mysql-part-2.html</link>
<description><![CDATA[<b>April 4, 2018</b> by Robert Gravelle<br/><br/><p>In last week's Getting Row Counts in MySQL blog we employed the native COUNT() function's different variations to tally the number of rows within one MySQL table.  In today's follow-up, we'll use the COUNT() function in more sophisticated ways to tally unique values as well as those which satisfy a condition.</p><h1 class="blog-sub-title">Distinct Counts</h1><p>The COUNT(DISTINCT) function returns the number of rows with unique non-NULL values. Hence, the inclusion of the DISTINCT keyword eliminates duplicate rows from the count. Its syntax is:</p><font face="courier new">COUNT(DISTINCT expr,[expr...])</font><p>As with the regular COUNT() function, the <i>expr</i> parameters above can be any given expression, including specific columns, all columns (*), function return values, or expressions such as IF/CASE statements.</p><p style="font-size: 18px">A Simple Example</p><p>Say that we had the following table of clients:</p><font face="monospace">+------------+-------------+<br/><b>|&nbsp;last_name&nbsp;&nbsp;|&nbsp;first_name&nbsp;&nbsp;|</b><br/>+------------+-------------+<br/>|&nbsp;Tannen&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;Biff&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+-------------+<br/>|&nbsp;McFly&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;Marty&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+-------------+<br/>|&nbsp;Brown&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;Dr. 
Emmett&nbsp;&nbsp;|<br/>+------------+-------------+<br/>|&nbsp;McFly&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;George&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+-------------+<br/></font><p>Invoking COUNT(*) will return the number of all rows (4), while a COUNT(DISTINCT) on last_name will count each row with a duplicated last name as one, so that we get a total of 3:</p><font face="courier new">SELECT COUNT(*), COUNT(DISTINCT last_name) FROM clients;</font><br/><font face="monospace">+----------+---------------------------+<br/><b>|&nbsp;COUNT(*)&nbsp;|&nbsp;COUNT(DISTINCT last_name)&nbsp;|</b><br/>+----------+---------------------------+<br/>|&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+----------+---------------------------+<br/></font><h1 class="blog-sub-title">Conditional Counts using Expressions</h1><p>As mentioned above, COUNT() function parameters are not limited to column names; function return values and expressions such as IF/CASE statements are also fair game.</p><p>Here's a table that contains several users' telephone numbers and gender (limited to two for simplicity):</p><font 
face="monospace">+------------+---------+<br/><b>|&nbsp;tel&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;gender&nbsp;&nbsp;|</b><br/>+------------+---------+<br/>|&nbsp;7136609221&nbsp;|&nbsp;male&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+---------+<br/>|&nbsp;7136609222&nbsp;|&nbsp;male&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+---------+<br/>|&nbsp;7136609223&nbsp;|&nbsp;female&nbsp;&nbsp;|<br/>+------------+---------+<br/>|&nbsp;7136609228&nbsp;|&nbsp;male&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+---------+<br/>|&nbsp;7136609222&nbsp;|&nbsp;male&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+------------+---------+<br/>|&nbsp;7136609223&nbsp;|&nbsp;female&nbsp;&nbsp;|<br/>+------------+---------+<br/></font><p>Say that we wanted to build a query that told us how many distinct women and men there are in the table. The person is identified by their telephone ('tel') number. It is possible for the same 'tel' to appear multiple times, but that tel's gender should only be counted once.</p><p>Here's one option using a separate COUNT DISTINCT for each column:</p><font face="courier new">SELECT COUNT(DISTINCT tel) gender_count,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;COUNT(DISTINCT CASE WHEN gender = 'male' &nbsp;&nbsp;THEN tel END) male_count,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;COUNT(DISTINCT CASE WHEN gender = 'female' THEN tel END) female_count<br/>FROM people;</font><p>This SELECT statement would yield the following:</p><font face="monospace">+--------------+------------+---------------+<br/><b>|&nbsp;gender_count&nbsp;|&nbsp;male_count&nbsp;|&nbsp;female_count&nbsp;&nbsp;|</b><br/>+--------------+------------+---------------+<br/>|&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+--------------+------------+---------------+<br/></font><h1 
class="blog-sub-title">BONUS! Grouping and Including a Grand Total</h1><p>You can also stack counts vertically using GROUP BY:</p><font face="monospace">+---------+-------+<br/><b>|&nbsp;GroupId&nbsp;|&nbsp;Count&nbsp;|</b><br/>+---------+-------+<br/>|&nbsp;1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+---------+-------+<br/>|&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+---------+-------+<br/>|&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|&nbsp;7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+---------+-------+<br/>|&nbsp;Total:&nbsp;&nbsp;|&nbsp;16&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+---------+-------+<br/></font><p>The Total: row was produced using the SQL GROUPING() function, which was added in MySQL 8.0.1. It distinguishes a NULL representing the set of all values in a super-aggregate row (produced by a ROLLUP) from a NULL in a regular row.</p><p>Here's the full SQL:</p><font face="courier new">Select &nbsp;Case When Grouping(GroupId) = 1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Then 'Total:'<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Else GroupId<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;End As GroupId,<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Count(*) Count<br/>From &nbsp;&nbsp;&nbsp;user_groups<br/>Group By GroupId With Rollup<br/>Order By Grouping(GroupId), GroupId<br/></font><p>Next week, we'll obtain row counts from multiple tables and views.</p>]]></description>
</item>
<item>
<title>Getting Row Counts in MySQL (part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/695-getting-row-counts-in-mysql-part-1.html</link>
<description><![CDATA[<b>March 20, 2018</b> by Robert Gravelle<br/><br/><p>There are several ways to get a row count in MySQL.  Some database management products provide database statistics like table sizes, but it can also be done using straight SQL.  In today's tip, we'll use the native COUNT() function to retrieve the number of rows within one table or view within a MySQL database.  In part 2, we'll learn how to obtain a row count from multiple tables, or even from all of the tables within a database.</p><h1 class="blog-sub-title">The COUNT() Function's Many Forms</h1><p>You probably already know that the COUNT() function returns the number of rows in a table.  But there's a little more to it than that, as the COUNT() function can be utilized to count all rows in a table or only those rows that match a particular condition.  The secret is in the function signatures, of which there are several forms: COUNT(*), COUNT(expression) and COUNT(DISTINCT expression).</p><p>In each case, COUNT() returns a BIGINT that contains either the number of matching rows, or zero, if none were found.</p><h1 class="blog-sub-title">Counting all of the Rows in a Table</h1><p>To count all of the rows in a table, whether they contain NULL values or not, use COUNT(*).  That form of the COUNT() function returns the number of rows in a result set returned by a SELECT statement.</p><font face="courier new">SELECT COUNT(*) FROM cities;</font><p>A statement like the one above that invokes the COUNT(*) function without a WHERE clause or additional columns will perform very quickly on MyISAM tables because the number of rows is stored in the table_rows column in the tables table of the information_schema database.</p><p>For transactional storage engines such as InnoDB, storing an exact row count is problematic because InnoDB does not keep an internal count of rows in a table.  If it did, concurrent transactions might see different numbers of rows at the same time. 
Consequently, SELECT COUNT(*) statements only count rows visible to the current transaction. What that means is that running a query with COUNT(*) during a heavy workload could result in slightly inaccurate numbers.</p><h1 class="blog-sub-title">Counting only Non-null Rows with COUNT(expr)</h1><p>Passing an expression to COUNT() executes the COUNT(expr) version of the function. Invoking COUNT() in that way counts only the rows for which the expression evaluates to a non-NULL value. For example, say that we had a simple table called code_values:</p><font face="monospace">code_values<br/>+-------+<br/>|&nbsp;code&nbsp;&nbsp;|<br/>+-------+<br/>|&nbsp;1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-------+<br/>|&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-------+<br/>|&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-------+<br/>|&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-------+<br/></font><p>Selecting COUNT(code) from the table would return 2, even though there are 4 rows:</p><font face="courier new">SELECT COUNT(code) FROM code_values;</font><p/><font face="monospace">+-------------+<br/>|&nbsp;COUNT(code)&nbsp;|<br/>+-------------+<br/>|&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<br/>+-------------+</font><p>Note that counting entirely-NULL rows this way is rarely necessary, because rows consisting solely of NULL values should not occur in a normalized database - a condition that could only arise if the table lacked a primary key. In most cases, COUNT(*) will work just fine.</p><p>Of course, COUNT(expr) does accept proper expressions. Here's another query - one that fetches NULL and non-NULL rows alike:</p><font face="courier new">SELECT COUNT(IFNULL(code, 1)) FROM code_values;</font><p/><p style="font-size: 18px">Counting Non-null Values</p><p>The COUNT(expr) version of the COUNT() function also accepts individual column names, the effect of which is that COUNT(<i>column_name</i>) will return the number of records where <i>column_name</i> is not NULL.
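The non-NULL counting behavior can be sanity-checked outside of MySQL as well. Here is a minimal sketch using Python's built-in sqlite3 module (SQLite and MySQL agree on the NULL-handling semantics of COUNT(); the code_values data mirrors the example table above):

```python
import sqlite3

# In-memory SQLite database standing in for the MySQL examples above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE code_values (code INTEGER)")
conn.executemany("INSERT INTO code_values (code) VALUES (?)",
                 [(1,), (None,), (None,), (4,)])

total = conn.execute("SELECT COUNT(*) FROM code_values").fetchone()[0]
non_null = conn.execute("SELECT COUNT(code) FROM code_values").fetchone()[0]

print(total)     # 4 -- COUNT(*) counts every row, NULLs included
print(non_null)  # 2 -- COUNT(code) skips rows where code IS NULL
```
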
Hence, the following SELECT query would fetch the number of rows where the description column contained a non-NULL value:</p><font face="courier new">SELECT COUNT(description) FROM widgets;</font><p>In Part 2 we'll learn how to use the COUNT(DISTINCT expression) signature as well as how to obtain a row count from multiple tables.</p>]]></description>
</item>
<item>
<title>Using Navicat Code Snippets</title>
<link>https://www.navicat.com/company/aboutus/blog/693-using-navicat-code-snippets.html</link>
<description><![CDATA[<b>March 14, 2018</b> by Robert Gravelle<br/><br/><p>When the Non-Essentials edition of Navicat Premium introduced the Code Snippets feature, writing queries against your preferred database type became easier than ever before. The Code Snippets feature allows you to insert reusable code into your SQL statements when working in the SQL Editor. Besides gaining access to a collection of built-in snippets, you can also define your own snippets. Today's blog will provide an overview of this exciting new feature.</p><h1 class="blog-sub-title">The Code Snippet Pane</h1><p>Located on the right-hand side of the SQL Editor, the Code Snippets Pane provides an easy way to insert reusable code into SQL statements when working in the SQL Editor. If the editor window is docked to the Navicat main window, you can click the <strong style="font-family:courier new;color:blue;">()</strong> icon in the Information pane (#1 in the image below) to view the snippets library.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180314/code_snippet_pane.jpg" style="max-width: 100%;"></td></tr><p>The snippets library includes built-in and user-defined snippets. If you would like to reduce the number of entries in the list, you can enter a search string in the Search box to filter the list (#2 in the above image). You can also show the available snippets according to your database type or for all database types.  Right-click anywhere on the library and select "Show Snippets For Other Database Type" from the popup menu to either show or hide snippets for other database types. A checkmark beside the item indicates that "Show Snippets For Other Database Type" is active.</p><h1 class="blog-sub-title">Inserting a Snippet into a Query</h1><p>There are two ways to insert a snippet into the editor:</p><ul style="list-style-type: decimal;"><li>Just start typing the name of a snippet in the editor. 
Smart Code Completion will present a list of suggestions for you to choose from. Select a snippet name from the list to insert it in the editor.<br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180314/code_completion.jpg" style="max-width: 100%;"></li><br/><li>You can also drag and drop a snippet from the library into the editor. After inserting the snippet with placeholders in the editor, you can navigate between them by clicking on a placeholder and then using the TAB key to iterate over each. Typing while a placeholder is selected will overwrite it with the typed characters.</li></ul><h1 class="blog-sub-title">Adding a Snippet</h1><p>You can create your own code snippets and add them to the library. To add a code snippet, select your desired code in the editor, then right-click and select "Create Snippet" from the popup menu.</p><p>Alternatively, you can click the Create Snippet button in the Code Snippet pane (#3 in the Code Snippet pane image above) to open the New Snippet dialog directly. You may then assign a title and label type to your snippet. Type the code in the New Snippet window and then click the Save button to save your code snippet to the library.</p><h1 class="blog-sub-title">Editing a Snippet</h1><p>Double-clicking a snippet in the library opens the Edit dialog. Built-in snippets are read-only, as is indicated in the dialog title.</p><br/><img src="https://www.navicat.com/link/Blog/Image/2018/20180314/edit_dialog_for_preset_snippet.jpg" style="max-width: 100%;"><p>Moreover, you can hide the built-in snippets in the Snippets Pane by right-clicking anywhere on the library and disabling the "Show Preset Snippets" item in the popup menu.</p>]]></description>
</item>
<item>
<title>Navicat Query Builder: Setting Grouping Criteria (Part 5)</title>
<link>https://www.navicat.com/company/aboutus/blog/690-navicat-query-builder-setting-grouping-criteria-part-5.html</link>
<description><![CDATA[<b>March 6, 2018</b> by Robert Gravelle<br/><br/><p>Available in Non-Essentials editions of Navicat for MySQL, PostgreSQL, SQLite, MariaDB, and Navicat Premium, the Query Builder is a tool for creating and editing queries visually. Part 4 described how to include native SQL aggregate functions in your queries to display column statistics. This installment describes how to use the Query Builder to filter grouped data based on a HAVING condition.</p><h1 class="blog-sub-title">About the Sakila Sample Database</h1><p>The query that we'll be building here today will run against the <a class="default links" href="http://dev.mysql.com/doc/sakila/en/index.html" target="_blank">Sakila sample database</a>. It contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. Please refer to the <a class="default links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><h1 class="blog-sub-title">Filtering Result Groups with the HAVING Clause</h1><p>The SQL HAVING clause is used in combination with the GROUP BY clause to restrict the groups of returned rows based on one or more criteria. 
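The GROUP BY/HAVING combination can be sketched with Python's built-in sqlite3 module; the actor rows below are hypothetical stand-ins for the Sakila data used in this walkthrough:

```python
import sqlite3

# GROUP BY aggregates rows per last_name; HAVING then filters the groups.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE actor (first_name TEXT, last_name TEXT)")
conn.executemany("INSERT INTO actor VALUES (?, ?)", [
    ("PENELOPE", "GUINESS"), ("ED", "CHASE"), ("UMA", "CHASE"),
    ("SISSY", "CHASE"), ("NICK", "WAHLBERG"), ("JOE", "WAHLBERG")])

rows = conn.execute("""
    SELECT last_name, COUNT(*) AS last_name_count
    FROM actor
    GROUP BY last_name
    HAVING last_name_count >= 3
""").fetchall()

print(rows)  # only the CHASE group (3 rows) survives the HAVING filter
```

Note that, as in MySQL, the HAVING clause may reference the last_name_count column alias, since it is evaluated after the groups have been formed.
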
In contrast to the WHERE clause, which is applied before the GROUP BY clause, the HAVING clause applies a filter to rows after they have been aggregated by the GROUP BY clause.</p><h1 class="blog-sub-title">Determining How Many Actors Share the Same Last Name</h1> <p>If we wanted to know how many actors in our database share the same last name with at least two other actors, we could use the GROUP BY clause to aggregate actors according to the last_name field of the actor table.</p><p>I find that whether I'm constructing a query using the Query Editor or the Query Builder, it's best to choose the tables first.</p><ul style="list-style-type: decimal;"><li>With that in mind, open the Query Builder, click on the &lt;Click here to add tables&gt; label beside the FROM keyword, and choose the sakila.actor table from the list:<br><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/selecting the actor table.jpg" style="max-width: 100%;"></li><br><li>That will cause the actor table to appear in the top pane along with all of its fields. We will require two fields: the last_name and a count of rows. Click the box beside the last_name field in the table:<br><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/last_name field selected.jpg" style="max-width: 100%;"><p>To add the Count function to the field list, click on the &lt;Click here to add fields&gt; label underneath the sakila.last_name field in the SQL statement and enter "Count(*)" in the Edit tab of the popup dialog:<img src="https://www.navicat.com/link/Blog/Image/2018/20180306/adding the count function.jpg" style="max-width: 100%;"></p></li><li>The next step is to add the GROUP BY clause.
To do that, click the &lt;Click here to add GROUP BY&gt; label and choose the sakila.last_name field from the popup dialog.</li><br><li>Click the OK button to close the Query Builder.</li></ul><p>That will add the following SQL to the Query Editor:</p><pre>SELECT
	actor.last_name,
	Count(*) AS last_name_count
FROM
	actor
GROUP BY
	actor.last_name</pre><p>Here are the results produced by the above query:</p><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/query results grouped by actor last_name.jpg" style="max-width: 100%;"><p>As you can see, the results are grouped and sorted by last_name. What it doesn't do is limit the results to those actors who share their last name with at least two other actors. To do that we need to add the HAVING clause.</p><ul style="list-style-type: decimal;"><li>Reopen the Query Builder and click the &lt;Click here to add conditions&gt; label beside the HAVING keyword. That will insert an "&lt;--&gt; = &lt;--&gt;" expression label.</li><br><li>Click on the "&lt;--&gt;" label on the left-hand side of the expression. The last_name_count field does not appear in the field list because the list only contains table fields. Therefore, enter it in the Edit tab:<br><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/entering the last_name field.jpg" style="max-width: 100%;"></li><li>Next, click on the equals "=" label to enter the comparison operator.
Choose the "greater than or equal to" (>=) operator from the list:<br><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/selecting the greater than or equal to comparison operator.jpg" style="max-width: 100%;"></li><li>Finally, click on the "&lt;--&gt;" label on the right-hand side of the expression and enter a value of "3" in the Edit tab.</li><br><li>Click the OK button to close the Query Builder.</li></ul><p>That will add the "HAVING last_name_count >= 3" expression to the query so that, this time, the query only shows actors whose last names appear three or more times in the table:</p><img src="https://www.navicat.com/link/Blog/Image/2018/20180306/query results with having clause.jpg" style="max-width: 100%;">]]></description>
</item>
<item>
<title>Navicat Query Builder - Working with Aggregated Output Fields (Part 4)</title>
<link>https://www.navicat.com/company/aboutus/blog/688-navicat-query-builder-working-with-aggregated-output-fields-part-4.html</link>
<description><![CDATA[<b>February 27, 2018</b> by Robert Gravelle<br/><br/><p>In addition to fetching individual values, the SELECT statement is also able to aggregate data elements based on one or more columns. This installment on the Navicat Query Builder describes how to include native SQL aggregate functions in your queries to display column statistics.</p><h1 class="blog-sub-title">About the Sakila Sample Database</h1><p>As with previous installments, the queries that we'll be building here today will run against the <a class="default-links" href="http://dev.mysql.com/doc/sakila/en/index.html" target="_blank">Sakila sample database</a>. It contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. Please refer to the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><h1 class="blog-sub-title">Using Aggregate Functions</h1><p>In SQL, output fields may be passed to aggregate functions to produce statistics for the column data.
Aggregate functions include COUNT, MAX, MIN, SUM, and AVG:</p><ul style="list-style-type: disc;"><li>COUNT(): Returns the number of rows containing non-NULL values in the specified field.</li><li>SUM(): Returns the sum of the non-NULL values in the specified field.</li><li>AVG(): Returns the average of the non-NULL values in the specified field.</li><li>MIN(): Returns the minimum of the non-NULL values in the specified field.</li><li>MAX(): Returns the maximum of the non-NULL values in the specified field.</li></ul><p>As touched upon in <a class="default-links" href="https://navicat.com/en/company/aboutus/blog/680-navicat-query-builder-field-selection-part-2.html" target="_blank">Part 2</a>, clicking the &lt;func&gt; modifier to the left of an output field in the Navicat Query Builder opens a list of SUM, MAX, MIN, AVG, and COUNT aggregate functions. Selecting the desired function from the list will insert it into the query:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180227/aggregate function list.jpg" style="max-width: 100%;"></td></tr><p>Here is a query that uses aggregate functions to display the number of films, average film length, total film length, as well as the minimum and maximum rental rates:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180227/aggregate function query results.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Setting Grouping Criteria</h1><p>The above results pertain to the entire table. It is also possible to group records by one or more columns using the GROUP BY clause.</p><p>Let's design a query to show a count of rented films by month. In the Query Builder:</p><ul style="list-style-type: decimal;"><li>Drag the film and rental tables into the editor.</li><li>Join the two tables on the film.film_id and rental.inventory_id fields by dragging the former over to the latter.</li><li>Add an output field.
In the editor, enter "MONTHNAME(rental_date)".</li><li>Click on the &lt;Alias&gt; label and enter a value of "rental_month".</li><li>Add a second field. This time, select rental_id from the field list.</li><li>Click the &lt;Func&gt; label and choose COUNT from the list.</li><li>Click on the &lt;Alias&gt; label and enter an Alias of "rental_count".</li><li>Click on the &lt;Click here to add GROUP BY&gt; label and use the editor to enter "MONTH(rental_date)".<br>      <br>The Query Builder should now look like this:<br><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180227/query builder with aggregate query.jpg" style="max-width: 100%;"></td></tr></li><li>Click OK to close the Query Builder and return to the Query Editor.</li></ul><p>Run the query to view the results:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180227/aggregate query results.jpg" style="max-width: 100%;"></td></tr><p>Notice how applying the MONTHNAME function on the rental_month output field displays the full month name rather than the month number as the MONTH() function does. In any event, either function could be employed to group results by month.</p>]]></description>
</item>
<item>
<title>Determine How Much Data Has Changed Since Your Last Full Backup on SQL Server</title>
<link>https://www.navicat.com/company/aboutus/blog/687-determine-how-much-data-has-changed-since-your-last-full-backup-on-sql-server.html</link>
<description><![CDATA[<b>February 23, 2018</b> by Robert Gravelle<br/><br/><p>It has become widespread knowledge far beyond Database Administrator (DBA) circles that one of the best ways to safeguard against data loss, corruption, and disasters - both man-made and natural - is by performing backups on a regular basis. The most common backup types are full, incremental, and differential. In particular, differential backups have played an increasingly important role in the backup policies of businesses, especially for those running large databases. One of the challenges presented by differential backups is that it can be difficult to determine how much data has changed since the last full backup. Answering this question is crucial in deciding whether to take a full or differential backup. In this tip, we will see how SQL Server 2017 helps solve this problem.</p><h1 class="blog-sub-title">Backup Types Explained</h1><p>Before we get into the specifics of determining how much data has changed since the last full backup, let's take a moment to review the three main types of backup.</p><p style="font-size: 18px">Full backups</p><p>The most basic type of backup is a full backup. As the name implies, this type of backup copies all data to another database or storage media. Backing up the entire data set makes restoring it fairly trivial if the need should ever arise. However, performing a full backup can take a very long time, depending on how much data there is, and requires ample space to store it.</p><p style="font-size: 18px;">Incremental backups</p><p>In an incremental backup, only the data that has changed since the last backup is copied. A timestamp is typically employed and compared to the timestamp of the last backup. Because an incremental backup will only copy data since the last backup, it may be run as often as desired. The benefit of incremental backups is that they copy a smaller amount of data than a full backup.
Thus, incremental backups will complete faster, and require less media to store the backed up data.</p><p style="font-size: 18px;">Differential backups</p><p>A differential backup is similar to an incremental one the first time it is performed, in that it will copy all data changed since the previous backup. However, on successive runs, it will continue to copy all data changed since the previous full backup. Thus, it will store more data than an incremental one on subsequent runs, although typically far less than a full backup.</p><h1 class="blog-sub-title">Pages, Extents and Dynamic Management Views (DMVs) in SQL Server</h1><p>The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into 8 KB pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is to say, SQL Server reads or writes whole data pages. At 8 KB per page, this means SQL Server databases have 128 pages per megabyte.</p><p>Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.</p><p>DMVs and functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. The SQL Server 2017 version of the DMV sys.dm_db_file_space_usage has a new column named modified_extent_page_count. This new column shows the number of pages that have changed since the last full backup.
For example:</p><pre>SELECT total_page_count, allocated_extent_page_count,
       unallocated_extent_page_count, modified_extent_page_count
FROM sys.dm_db_file_space_usage
GO</pre><p>Here is what running the above query right after the full backup might produce:</p><font face="courier new"><body><table border="0"><tr><td>total_page_count</td><td>&nbsp;&nbsp;</td><td>allocated_extent_page_count</td><td>&nbsp;&nbsp;</td><td>unallocated_extent_page_count</td><td>&nbsp;&nbsp;</td><td>modified_extent_page_count</td></tr><tr><td colspan="7">-------------------------------------------------------------------------------------------------------</td></tr><tr><td>1024</td><td></td><td>320</td><td></td><td>704</td><td></td><td>64</td></tr></table></body></font><p>If we were now to create a new table and insert a row, rerunning the query would produce a modified output:</p><font face="courier new"><body><table border="0"><tr><td>total_page_count</td><td>&nbsp;&nbsp;</td><td>allocated_extent_page_count</td><td>&nbsp;&nbsp;</td><td>unallocated_extent_page_count</td><td>&nbsp;&nbsp;</td><td>modified_extent_page_count</td></tr><tr><td colspan="7">-------------------------------------------------------------------------------------------------------</td></tr><tr><td>1024</td><td></td><td>320</td><td></td><td>704</td><td></td><td>128</td></tr></table></body></font><p>You can see that the modified_extent_page_count has gone from 64 to 128. In the next blog, we'll learn how to interpret these results.</p>]]></description>
</item>
<item>
<title>Navicat Query Builder - Filtering Results (Part 3)</title>
<link>https://www.navicat.com/company/aboutus/blog/683-navicat-query-builder-filtering-results-part-3.html</link>
<description><![CDATA[<b>February 13, 2018</b> by Robert Gravelle<br/><br/><p>Available in Non-Essentials editions of Navicat for MySQL, PostgreSQL, SQLite, MariaDB, and Navicat Premium, the Query Builder is a tool for creating and editing queries visually. In Part 1, we used it to write a query to fetch a list of actors that appeared in movies released during a given year. Part 2 was all about field selection. Today's blog will provide an overview on adding WHERE criteria to a SELECT query using the Navicat Premium Query Builder.</p><h1 class="blog-sub-title">About the Sakila Sample Database</h1><p>As with parts 1 and 2, the queries that we'll be building here today will run against the <a class="default-links" href="http://dev.mysql.com/doc/sakila/en/index.html" target="_blank">Sakila sample database</a>. It contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. Please refer to the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><h1 class="blog-sub-title">Using the WHERE Clause</h1><p>The WHERE clause is the section of a SELECT query that filters the results based on a set of criteria. It's useful in reducing the number of rows returned by specifying the subset of records that we're interested in. For instance, taking our query from part 1 that produced a list of actors that appeared in movies released during a given year, it still returned almost one thousand rows.
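The effect of a LIKE condition in the WHERE clause can be sketched with Python's built-in sqlite3 module; the film_list rows here are hypothetical stand-ins for Sakila's view:

```python
import sqlite3

# A WHERE ... LIKE filter keeps only rows whose actors column matches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE film_list (title TEXT, actors TEXT)")
conn.executemany("INSERT INTO film_list VALUES (?, ?)", [
    ("ACADEMY DINOSAUR", "PENELOPE GUINESS, GENE HOPKINS"),
    ("ACE GOLDFINGER", "BOB FAWCETT, MINNIE ZELLWEGER"),
    ("ADAPTATION HOLES", "GENE HOPKINS, JULIA MCQUEEN")])

matches = conn.execute(
    "SELECT title FROM film_list WHERE actors LIKE '%GENE HOPKINS%'"
).fetchall()

print(matches)  # only the titles whose actors list contains GENE HOPKINS
```
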
One way to further limit the number of rows returned would be to include only certain actors that we wanted information on.</p><p>Add the following SQL to the Navicat Premium Query Editor and click the Query Builder button to display it in the Query Builder:</p><pre>SELECT
	film.title,
	film.film_id,
	film.release_year,
	concat('$', film_list.price) AS price,
	film_list.actors
FROM
	film
	INNER JOIN film_list ON film.film_id = film_list.FID</pre><p>Beside the WHERE clause you'll see the label "&lt;Click here to add conditions&gt;". In the Query Builder, all labels within "&lt;...&gt;" brackets are clickable and open a context-specific list and/or editor. Clicking the "&lt;Click here to add conditions&gt;" label changes the text to the "&lt;--&gt; = &lt;--&gt;" expression. It's actually three different clickable regions:</p><ul style="list-style-type: decimal;"><li>The left-hand field/expression: "&lt;--&gt;"</li><li>The comparison operator: "="</li><li>The right-hand field/expression: "&lt;--&gt;"</li></ul><p>Let's proceed to fill out the expression from left to right, as we would in writing a query by hand.</p><p>We can search the actors field using a Like expression. Click on the "&lt;--&gt;" label to the left of the equals sign ("=") and select the film_list.actors item from the field list tab in the popup dialog (it's the last one):</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180213/field list.jpg" style="max-width: 100%;"></td></tr><p>Now click the equals sign ("="). That opens a list of comparison operators to choose from. Select the "Like" operator:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180213/comparison operator list.jpg" style="max-width: 100%;"></td></tr><p>Next, we'll enter the actor that we're looking for.
Click the "&lt;--&gt;" label to the right of the equals sign ("=") and enter "'%GENE HOPKINS%'" (without the double quotes) in the Edit tab:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180213/edit tab.jpg" style="max-width: 100%;"></td></tr><p>With our WHERE criteria set, click the Query Builder's OK button to close the dialog. You'll see that the "WHERE film_list.actors LIKE '%GENE HOPKINS%'" line has been appended to the SELECT statement in the Query Editor.</p><p>Run the query and verify that all 22 rows list GENE HOPKINS as one of the film's actors:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180213/query results.jpg" style="max-width: 100%;"></td></tr>]]></description>
</item>
<item>
<title>Eliminating Duplicate Rows using SELECT DISTINCT in MySQL (Part 4)</title>
<link>https://www.navicat.com/company/aboutus/blog/681-eliminating-duplicate-rows-using-select-distinct-in-mysql-part-4.html</link>
<description><![CDATA[<b>January 30, 2018</b> by Robert Gravelle<br/><br/><p>MySQL offers a virtually endless variety of options for dealing with duplicated data. Most duplicates can be updated or removed using a single statement. However, there are times when multiple commands must be issued to get the job done. Today's blog will present a solution that employs a temporary table along with a SELECT DISTINCT query.</p><h1 class="blog-sub-title">Permanent vs. Temporary Tables in MySQL</h1><p>It should be noted that the temporary table that we will be creating here today differs from a true temporary table in MySQL, in that we are not adding the TEMPORARY keyword to the CREATE TABLE statement.</p><p>In MySQL, a temporary table is a special type of table that allows you to store a temporary result set, which you can reuse several times in a single session. A temporary table comes in handy when it's impossible or expensive to query data using a single SELECT statement. Like a temporary table created using the TEMPORARY keyword, our temporary table will store the intermediate result of a SELECT query, so that we can issue one or more additional queries to fully process the data. We will then replace the target table with the temp table.</p><h1 class="blog-sub-title">Removing Duplicate Rows from the amalgamated_actors Table</h1><p>In the How to Delete Duplicate Rows with Different IDs in MySQL (Part 3) blog, we successfully removed rows that contained duplicate names. However, that still left rows whose IDs and names were the same, in other words, where entire rows were duplicated.
For instance, we can see in the result set below that <font face="courier new" style="font-size: 15px">22&nbsp;&nbsp;JENNIFER&nbsp;&nbsp;DAVIS</font> appears twice:</p><font face="courier new"><body><table border="0"><tr><td><b>id</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>first_name</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>last_name</b></td></tr><tr><td colspan="5"><b>---------------------------------------------------</b></td></tr><tr><td>10</td><td></td><td>PENELOPE</td><td></td><td>GUINESS</td></tr><tr><td>14</td><td></td><td>ED</td><td></td><td>CHASE</td></tr><tr style="color:#ff0000;"><td>22</td><td></td><td>JENNIFER</td><td></td><td>DAVIS</td></tr><tr><td>23</td><td></td><td>JOHNNY</td><td></td><td>LOLLOBRIGIDA</td></tr><tr><td>27</td><td></td><td>BETTE</td><td></td><td>NICHOLSON</td></tr><tr><td>34</td><td></td><td>GRACE</td><td></td><td>MOSTEL</td></tr><tr><td>41</td><td></td><td>NICK</td><td></td><td>WAHLBERG</td></tr><tr><td>39</td><td></td><td>JOE</td><td></td><td>SWANK</td></tr><tr><td>23</td><td></td><td>CHRISTIAN</td><td></td><td>GABLE</td></tr><tr style="color:#ff0000;"><td>22</td><td></td><td>JENNIFER</td><td></td><td>DAVIS</td></tr></table></body></font><p>This is an ideal candidate for the temp table approach.</p><p>MySQL offers the special CREATE TABLE ... 
LIKE command to create an empty table based on the definition of another table, including any column attributes and indexes defined in the original table.</p><p>Hence, we can create a table based on the <font face="courier new" style="font-size: 15px">amalgamated_actors</font> table like so:</p><p style="font-size: 15px"><font face="courier new">-- Create temporary table<br>CREATE TABLE wp.temp_table LIKE wp.amalgamated_actors;</font></p><p>Here's the statement to copy all of the data from the <font face="courier new" style="font-size: 15px">amalgamated_actors</font> table into <font face="courier new" style="font-size: 15px">temp_table</font>:</p><p style="font-size: 15px"><font face="courier new">INSERT INTO wp.temp_table<br>&nbsp;&nbsp;&nbsp;&nbsp;SELECT DISTINCT * FROM wp.amalgamated_actors;</font></p><p>The SELECT DISTINCT clause is key to removing duplicate rows.</p><p>Finally, we need to rename the original table, so that we can replace it with the temp table, and drop the original table:</p><p style="font-size: 15px"><font face="courier new">-- Rename and drop<br>RENAME TABLE wp.amalgamated_actors TO wp.old_amalgamated_actors,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;wp.temp_table TO wp.amalgamated_actors;<br><br>DROP TABLE wp.old_amalgamated_actors;</font></p><p>Now there is only one row with <font face="courier new" style="font-size: 15px">JENNIFER DAVIS</font>:</p><font face="courier new"><body><table border="0"><tr><td><b>id</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>first_name</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>last_name</b></td></tr><tr><td 
colspan="5"><b>---------------------------------------------------</b></td></tr><tr><td>10</td><td></td><td>PENELOPE</td><td></td><td>GUINESS</td></tr><tr><td>14</td><td></td><td>ED</td><td></td><td>CHASE</td></tr><tr><td>22</td><td></td><td>JENNIFER</td><td></td><td>DAVIS</td></tr><tr><td>23</td><td></td><td>JOHNNY</td><td></td><td>LOLLOBRIGIDA</td></tr><tr><td>27</td><td></td><td>BETTE</td><td></td><td>NICHOLSON</td></tr><tr><td>34</td><td></td><td>GRACE</td><td></td><td>MOSTEL</td></tr><tr><td>41</td><td></td><td>NICK</td><td></td><td>WAHLBERG</td></tr><tr><td>39</td><td></td><td>JOE</td><td></td><td>SWANK</td></tr><tr><td>23</td><td></td><td>CHRISTIAN</td><td></td><td>GABLE</td></tr></table></body></font><p>Our original <font face="courier new" style="font-size: 15px">amalgamated_actors</font> table is no more, having been replaced by the temp table.</p><h1 class="blog-sub-title">Removing Duplicate Rows using the UNIQUE Constraint</h1><p>In the next blog on handling duplicate data, we'll employ the UNIQUE constraint to delete rows with duplicate name fields, regardless of whether or not the IDs are duplicated.</p>]]></description>
</item>
<item>
<title>Navicat Query Builder - Field Selection (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/680-navicat-query-builder-field-selection-part-2.html</link>
<description><![CDATA[<b>January 24, 2018</b> by Robert Gravelle<br/><br/><p>Available in Non-Essentials editions of Navicat for MySQL, PostgreSQL, SQLite, MariaDB, and Navicat Premium, the Query Builder allows anyone to create and edit queries with only a cursory knowledge of SQL. In Part 1, we used it to write a query to fetch a list of actors that appeared in movies released during a given year. Today's blog will provide a more detailed overview of selecting output fields.</p><h1 class="blog-sub-title">Today's Query</h1><p>The query that we'll be building here today will again run against the <a class="default-links" href="http://dev.mysql.com/doc/sakila/en/index.html" target="_blank">Sakila sample database</a>. It contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. Please refer to the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><p>Much like the previous blog, we will be building a query to fetch a list of actors that appeared in movies released during a given year. The difference is that this time we will make use of a view that lists actors for each title as a comma-delimited list.</p><h1 class="blog-sub-title">Setting Field Associations</h1><p>Dragging a table/view from the left pane to the Diagram Design pane, or double-clicking it, adds the table or view to the query. The Query Builder will automatically include entity relationships where foreign key constraints have been declared. In this case, we'll be needing the film table and the film_list view. They do not have a defined association, so we have to add one ourselves. To do that, just drag a field from one object to another and a line will appear between the linked fields - i.e. 
between film.film_id and film_list.FID.</p><p>The Query Builder will not only draw the association between the objects, but it will also add an INNER JOIN to the query:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180124/tables with inner join.jpg" style="max-width: 100%;"></td></tr><p>With the tables/views selected, we are ready to choose our output fields.</p><p>Click the checkbox beside each field that you want to appear in your query results - i.e. film.title, film.film_id, film.release_year, and film_list.actors.</p><p>The fields you have selected in the Diagram Design pane will then be displayed in the Syntax pane, where they may then be modified by clicking on the &lt;Distinct&gt;, &lt;func&gt; and &lt;Alias&gt; modifiers.</p><h1 class="blog-sub-title">Using Functions</h1><p>Clicking the &lt;func&gt; modifier opens a list of SUM, MAX, MIN, AVG, and COUNT aggregate functions. You may also enter another function via the Edit tab. For example, we could select the film_list.price field and enter "concat('$', film_list.price)" in the Edit tab to format the price. We can also move the field position by dragging it - for instance, before the actor list:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180124/custom function.jpg" style="max-width: 100%;"></td></tr><h3 style="font-size: 18px;">Field Aliases</h3><p>When using functions, it's always a good idea to choose a more descriptive field name using an alias. 
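The effect of an alias on a computed column can be sketched with Python's built-in sqlite3 module (hypothetical data; note that SQLite spells string concatenation as || rather than MySQL's concat()):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE film_list (title TEXT, price TEXT)")
con.execute("INSERT INTO film_list VALUES ('ACADEMY DINOSAUR', '0.99')")

# Without an alias, the result column is named after the raw expression
cur = con.execute("SELECT '$' || price FROM film_list")
print(cur.description[0][0])  # the unwieldy expression text, e.g. "'$' || price"

# With an alias, the result column gets a clean, descriptive name
cur = con.execute("SELECT '$' || price AS price FROM film_list")
print(cur.description[0][0])  # price
print(cur.fetchone()[0])      # $0.99
```

Anything consuming the result set - a report, an export, application code - then sees a stable, readable column name instead of the formula.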
For example, in this case we can simply go with the original field name of "price":</p><tr><td><img src="https://www.navicat.com/link/Blog/Image/2018/20180124/setting the field alias.jpg" style="max-width: 100%;"></td></tr><p>Here is the final query produced by the Query Builder:</p><pre>SELECT
  film.title,
  film.film_id,
  film.release_year,
  concat('$', film_list.price) AS price,
  film_list.actors
FROM
  film
  INNER JOIN film_list ON film.film_id = film_list.FID</pre><p>And here are the results:</p><tr><td><img src="https://www.navicat.com/link/Blog/Image/2018/20180124/results.jpg" style="max-width: 100%;"></td></tr>]]></description>
</item>
<item>
<title>How to Delete Duplicate Rows with Different IDs in MySQL (Part 3)</title>
<link>https://www.navicat.com/company/aboutus/blog/679-how-to-delete-duplicate-rows-with-different-ids-in-mysql-part-3.html</link>
<description><![CDATA[<b>January 16, 2018</b> by Robert Gravelle<br/><br/><p>The majority of duplicate records fall into one of two categories: Duplicate Meaning and Non-unique Keys. The <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/671-how-to-spot-and-delete-values-with-duplicate-meaning-in-mysql-part-1.html" target="blank">How to Spot and Delete Values with Duplicate Meaning in MySQL</a> blog dealt with Duplicate Meaning; the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/672-how-to-identify-duplicates-with-non-unique-keys-part-2.html" target="blank">follow-up</a> addressed how to identify Non-unique Keys. That's where two records in the same table have the same key, but may or may not have different values and meanings. Today's blog will cover how to delete rows with duplicated data, but with different keys.</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180116/duplicates3.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Identifying Duplicates by Type</h1><p>The last query presented in the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/672-how-to-identify-duplicates-with-non-unique-keys-part-2.html" target="blank">How to Identify Duplicates with Non-unique Keys in MySQL blog</a> listed all the duplicates in a format that was easy to visually scan through:</p><font face="courier New"><body><table border="0"><tr><td><b>Repetitions</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>row_data</b></td></tr><tr><td colspan="3"><b>----------------------------------------------------------------</b></td></tr><tr><td>2</td><td/><td>22 (DAVIS, JENNIFER) | 22 (DAVIS, JENNIFER)</td></tr><tr><td>2</td><td/><td>23 (LOLLOBRIGIDA, JOHNNY) | 23 (GABLE, CHRISTIAN)</td></tr><tr><td>2</td><td/><td>41 (WAHLBERG, NICK) | 12 (WAHLBERG, NICK)</td></tr></table></body></font><p>Having identified all of the duplicated keys and values, we can decide 
how best to deal with the redundant data.</p><p>JENNIFER DAVIS appears in two records with the same key of 22, making those rows exact duplicates. Nick Wahlberg's name fields are duplicated, but the IDs are not. There is also a duplicated key that is associated with two unrelated actors: #23 for JOHNNY LOLLOBRIGIDA and CHRISTIAN GABLE. Regarding the duplicated keys of 22 and 23, the first is a true duplicate, whereas the second only needs a new key to be generated for one of the records.</p><h1 class="blog-sub-title">Deleting Rows using DELETE JOIN</h1><p>In the <a class="default-links" href="https://www.navicat.com/en/company/aboutus/blog/671-how-to-spot-and-delete-values-with-duplicate-meaning-in-mysql-part-1.html" target="blank">How to Spot and Delete Values with Duplicate Meaning in MySQL</a> blog, we removed duplicates from SELECT result sets by performing a Search &amp; Replace on values. Here we will permanently delete one of the duplicated rows using the DELETE JOIN statement.</p><p>Since we are comparing fields from the same table, we have to join the table to itself. We can choose to keep either the lower or higher id number by comparing the ids in the WHERE clause. The following statement keeps the highest id:</p><pre>DELETE a FROM wp.amalgamated_actors a
  INNER JOIN wp.amalgamated_actors a2
WHERE a.id &lt; a2.id
AND   a.first_name = a2.first_name
AND   a.last_name  = a2.last_name;

1 row(s) affected
0.093 sec</pre><p>In this case, the affected (deleted) row is NICK WAHLBERG with an id of 12. 
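The keep-the-highest-id logic can be sketched end-to-end with Python's built-in sqlite3 module (hypothetical data; SQLite lacks MySQL's multi-table DELETE, so a correlated EXISTS subquery stands in for the self-join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE amalgamated_actors (id INT, first_name TEXT, last_name TEXT)")
rows = [(41, "NICK", "WAHLBERG"), (12, "NICK", "WAHLBERG"), (14, "ED", "CHASE")]
con.executemany("INSERT INTO amalgamated_actors VALUES (?, ?, ?)", rows)

# Keep the highest id per name: delete any row for which a same-named
# row with a larger id exists (the a.id < a2.id condition above).
con.execute("""
    DELETE FROM amalgamated_actors AS a
    WHERE EXISTS (
        SELECT 1 FROM amalgamated_actors AS a2
        WHERE a.id < a2.id
          AND a.first_name = a2.first_name
          AND a.last_name  = a2.last_name
    )
""")
print(con.execute("SELECT id FROM amalgamated_actors ORDER BY id").fetchall())
# [(14,), (41,)]
```

As in the MySQL version, the duplicate NICK WAHLBERG row with the lower id (12) is the one removed.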
A quick SELECT confirms the result:</p><font face="courier New"><body><table border="0"><tr><td><b>id</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>first_name</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>last_name</b></td></tr><tr><td colspan="5"><b>-------------------------------------</b></td></tr><tr><td>10</td><td/><td>PENELOPE</td><td/><td>GUINESS</td></tr><tr><td>14</td><td/><td>ED</td><td/><td>CHASE</td></tr><tr><td>22</td><td/><td>JENNIFER</td><td/><td>DAVIS</td></tr><tr><td>23</td><td/><td>JOHNNY</td><td/><td>LOLLOBRIGIDA</td></tr><tr><td>27</td><td/><td>BETTE</td><td/><td>NICHOLSON</td></tr><tr><td>34</td><td/><td>GRACE</td><td/><td>MOSTEL</td></tr><tr><td>41</td><td/><td>NICK</td><td/><td>WAHLBERG</td></tr><tr><td>39</td><td/><td>JOE</td><td/><td>SWANK</td></tr><tr><td>23</td><td/><td>CHRISTIAN</td><td/><td>GABLE</td></tr><tr><td>22</td><td/><td>JENNIFER</td><td/><td>DAVIS</td></tr></table></body></font><p>If you wanted to keep the lowest id, you would just change the <font face="courier New">a.id &lt; a2.id</font> expression to <font face="courier New">a.id &gt; a2.id</font>:</p><font face="courier New"><body><table border="0"><tr><td><b>id</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>first_name</b></td><td>&nbsp;&nbsp;&nbsp;</td><td><b>last_name</b></td></tr><tr><td colspan="5"><b>-------------------------------------</b></td></tr><tr><td>10</td><td/><td>PENELOPE</td><td/><td>GUINESS</td></tr><tr><td>12</td><td/><td>NICK</td><td/><td>WAHLBERG</td></tr><tr><td>14</td><td/><td>ED</td><td/><td>CHASE</td></tr><tr><td>22</td><td/><td>JENNIFER</td><td/><td>DAVIS</td></tr><tr><td>23</td><td/><td>JOHNNY</td><td/><td>LOLLOBRIGIDA</td></tr><tr><td>27</td><td/><td>BETTE</td><td/><td>NICHOLSON</td></tr><tr><td>34</td><td/><td>GRACE</td><td/><td>MOSTEL</td></tr><tr><td>39</td><td/><td>JOE</td><td/><td>SWANK</td></tr><tr><td>23</td><td/><td>CHRISTIAN</td><td/><td>GABLE</td></tr><tr><td>22</td><td/><td>JENNIFER</td><td/><td>DAVIS</td></tr></table></body></font><h1 
class="blog-sub-title">Deleting Rows with Non-unique Keys</h1><p>In the case of JENNIFER DAVIS, who appears twice with the same id of 22, we would need to employ a different approach, because running the above statement with <font face="courier New">a.id = a2.id</font> would target every row in the table - we would essentially be matching every row against itself! In the next blog, we'll learn how to delete rows with non-unique keys such as these.</p>]]></description>
</item>
<item>
<title>Automate Database Replication with Navicat Premium 12</title>
<link>https://www.navicat.com/company/aboutus/blog/674-automate-database-replication-with-navicat-premium-12.html</link>
<description><![CDATA[<b>January 9, 2018</b> by Robert Gravelle<br/><br/><p>Unlike synchronization, which is a one-time process that brings the schema and data of two databases in sync, replication is a process that continuously (and automatically) reproduces data between two databases (although schema updates are also possible). Replication may be done asynchronously, so that a permanent connection between the two databases is not required, or during off-peak hours, when there is little traffic on the database server - during the late-night hours, for instance.</p><p>The main role of replication is to create an amalgamated repository of all user databases and/or disseminate the same level of information amongst all users. In either case the result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others. Implementing database replication in this way eliminates data ambiguity and inconsistency among users.</p><p>In the Database Synchronization Strategies whitepaper, we explored some strategies for synchronizing two databases of the same and of dissimilar types, using the Navicat Premium Database Management System. 
In today's follow-up, we'll cover how to automate database replication using Navicat Premium's new Automation utility.</p><h1 class="blog-sub-title">Replication Types</h1><p>Database replication can be done in at least three different ways:</p><ul style="list-style-type: disc;"><li>Snapshot replication: Data on one server is simply copied to another database on the same or on a different server.</li><li>Merging replication: Data from two or more databases is combined into a single database.</li><li>Transactional replication: Users receive full initial copies of the database and then receive periodic updates as data changes.</li></ul><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/replication.jpg" style="max-width: 100%;"></td></tr><p>In a distributed database management system (DDBMS), changes, additions, and deletions performed on the data at one location are automatically reflected in the data stored at all the other locations. Doing so ensures that every user accesses the same data set as all the other users.</p><p>Like synchronization, replication can be either Homogeneous or Heterogeneous:</p><ul style="list-style-type: disc"><li>Homogeneous: Same source and target DBs, i.e. Percona to Percona, MariaDB to MariaDB, MySQL to MySQL.</li><li>Heterogeneous: Dissimilar source and target DBs, i.e. Oracle to Microsoft SQL Server, PostgreSQL to Amazon DynamoDB, MySQL to Amazon Aurora.</li></ul><p>Heterogeneous replication is typically required when one or more external business partners employ a different database type than our own. Automated, regular data replication between the two environments is often an integral part of such an arrangement.</p><h1 class="blog-sub-title">Navicat Premium's Automation Utility</h1><p>Introduced in version 12, Navicat Premium's new Automation utility features an easy-to-use and intuitive interface for creating automated batch jobs. 
Automation is the execution of a process at one or more regular intervals, beginning and ending at a specific date and time, much like Windows Task Scheduler. In addition to replication, it can be utilized for a variety of jobs, including backups, queries, and reports.</p><p style="font-size: 12px"><i>Figure 1: Navicat Premium 12 Automation utility in Windows</i></p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/navicat automation utility in windows.png" style="max-width: 100%;"></td></tr><p></p><p style="font-size: 12px"><i>Figure 2: Navicat Premium 12 Automation utility in macOS</i></p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/navicat automation utility in macOS.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">The User Database</h1><p>We'll be using the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="blank">Sakila Sample MySQL Database</a> as our user database. It was developed by Mike Hillyer, a former member of the MySQL AB documentation team, and was created specifically for the purpose of providing a standard schema for use in books, tutorials, articles, and the like.</p><p>It's themed around the film industry and covers everything from actors and film studios to video rental stores. 
The full schema structure can be viewed on the <a class="default-links" href="http://dev.mysql.com/doc/sakila/en/sakila-structure.html" target="blank">MySQL Dev site</a>, if you're interested.</p><p>For instructions on setting up the Sakila database using Navicat, see the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="blank">Generating Reports on MySQL Data</a> article on databasejournal.com.</p><p style="font-size: 12px"><br>Sakila MySQL database structure in Navicat Premium 12</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Sakila MySQL database structure.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Snapshot Replication</h1><p>As described above, Snapshot Replication puts two databases in sync by copying data from one database to another on the same or on a different server. It is the simplest of the three types.</p><h1 class="blog-sub-title">Creating a Data Synchronization Job</h1><p>To automate synchronization as a replication process, a data synchronization profile must first be created. The steps to achieve both Homogeneous and Heterogeneous synchronization in Navicat Premium 12 were described in the Database Synchronization Strategies whitepaper. For the purposes of this tutorial, we'll use the first example on Homogeneous synchronization between the sakila and sakila2 databases.</p><p><i>Hint: Once you've created the sakila database, you can create the sakila2 database by right-clicking the connection in the Navigation pane and choosing New Database. 
Then enter the database name (sakila2) in the pop-up window.</i></p><p>To open the Data Synchronization wizard:</p><ul style="list-style-type: decimal;"><li>Select <b>Tools -&gt; Data Synchronization</b> from the menu bar.</li><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization1.png" style="max-width: 100%"></td></tr>  <p></p><li>The Data Synchronization Options tab contains only a few Compare Options checkboxes. We can leave them as is:</li><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization2.png" style="max-width: 100%"></td></tr>  <p></p><li>The next step of the Data Synchronization wizard is for mapping tables. Target tables may be selected via a dropdown list. In this case, we don't need to provide any mapping instructions, as the tables in both databases are identical:</li><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization3.png" style="max-width: 100%"></td></tr>  <p></p><li><p>After comparing data, the window shows the number of records that will be inserted, updated, or deleted in the target tables. You can uncheck the <b>Show identical table and others</b> option if you don't want to include tables with identical data or tables with different structures; in other words, tables that won't be updated. There are also checkboxes to deselect the tables or the actions you do not wish to apply to the target.</p><p>Selecting a table in the list displays the source and target tables' data in the bottom pane. Values that differ between source and target are highlighted. 
As in the top pane, you can uncheck the records that you do not want to apply to the target.</p></li><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization4.png" style="max-width: 100%"></td></tr></ul><p><b><i>Difference Options</i></b></p><p>The kind of differences to show may be selected from a dropdown list. Here are the possible options:</p><ul style="list-style-type: disc;"><li>Difference: Show all records that are different in source and target tables.</li><li>Insert: Only show the records that do not exist in the target table.</li><li>Update: Only show the records that exist in both source and target tables but have different values.</li><li>Delete: Only show the records that do not exist in the source table.</li><li>Same: Show the records that exist in both source and target tables with identical values.</li><li>All Rows: Show all records in source and target tables.</li></ul><p>In our case, selecting Update or Same would show zero rows because there are no rows to update (only insert) and none the same:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/option.png" style="max-width: 100%"></td></tr><p>As before, clicking the <b>Deploy</b> button generates and displays the Deployment Script:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization5.png" style="max-width: 100%"></td></tr><p>This screen, like all the previous ones, contains a <b>Save Profile</b> button that allows you to save your settings for future use. This particular screen also has a button for saving the <b>Deployment Script</b>.</p><p>You may still <b>Recompare</b> the two databases, or proceed to <b>Execute</b> the deployment script. There is a checkbox to <b>Continue on error</b> so that deployment does not halt upon encountering an error.</p><p>As the script executes, you may view its progress in the Message Log. 
It displays both the number of records processed and the completion percentage:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Data Synchronization6.png" style="max-width: 100%"></td></tr><p>After closing the dialog, we can confirm that the <i>sakila2</i> database tables now contain data:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/sakila2 database tables.png" style="max-width: 100%"></td></tr><br><b>Don't forget to save the profile, because the batch job will be using it.</b><p><b><i>Creating a Batch Job</i></b></p><p>We will now employ Navicat's Automation tool to set up a recurring replication between the sakila and sakila2 databases.</p><ul style="list-style-type: decimal;"><li>To begin, click the <b>Automation</b> button in the main toolbar.</li>  <p></p><li>Then click on <b>New Batch Job</b> in the Objects toolbar to open a New Batch Job tab.</li>  <p></p><li>Browse the source connection, database and/or schema in the Objects pane. That will make saved jobs for that database appear in the Available Jobs bottom pane.<br>In the Available Jobs pane, select the <b>Data Synchronization</b> job type, and then move the job from the Available Jobs list to the Selected Jobs list above by double-clicking or dragging it. 
(You can delete jobs from the Selected Jobs list in the same way.)</li><tr><td align="bottom"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/homogeneous job selected.jpg" style="max-width: 100%"></td></tr><li>Click the <b>Save</b> button on the Automation toolbar and provide a descriptive name in the Save dialog.</li></ul><p>That will enable the <b>Set Task Schedule</b> and <b>Delete Task Schedule</b> buttons.</p><p><b><i>The General tab</i></b></p><p>In the General tab of the Task Schedule dialog, you may provide a description for the task as well as set several options for its execution.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/homogeneous job general tab.jpg" style="max-width: 100%"></td></tr><ul style="list-style-type: decimal;"><li>Within the Security Options frame, you may configure which user or group account to run the task under. There is also an option to run the task whether the user is logged on or not. If you do choose that option, you'll have to provide your OS user password in Windows Scheduler when you save the schedule.</li>  <p></p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/saving user credentials.jpg" style="max-width: 100%"></td></tr><p>You may also choose to run the task as Hidden, as well as configure it to run on a specific operating system.</p></ul><p><b><i>Triggering the Task</i></b></p><p>The Triggers tab lists the task's schedule. Tasks may be configured to run on a variety of schedules, including One Time, Daily, Weekly, Monthly, and just about any permutation of each.</p><p>Click the <b>New...</b> button to bring up the New Trigger dialog:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/new trigger dialog.jpg" style="max-width: 100%"></td></tr><p>The same task may run according to numerous schedules. 
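Rules like "every second Sunday of the month" are resolved by the OS scheduler, but the underlying calendar arithmetic is easy to sketch in Python (illustrative only; Navicat itself delegates this to Windows Task Scheduler or its macOS equivalent):

```python
import calendar
from datetime import date

def second_sunday(year: int, month: int) -> date:
    """Date of the second Sunday of the given month."""
    # itermonthdates pads with adjacent-month days, hence the month filter;
    # weekday() == 6 is Sunday.
    sundays = [d for d in calendar.Calendar().itermonthdates(year, month)
               if d.weekday() == 6 and d.month == month]
    return sundays[1]

print(second_sunday(2018, 1))  # 2018-01-14
```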
For instance, we could schedule our database synchronization task to run every first of the month as well as on every second Sunday:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/triggers tab.jpg" style="max-width: 100%"></td></tr><p><b><i>Set Email Notification</i></b></p><p>Navicat allows you to generate and send personalized emails with results returned from a schedule. The results can be emailed to multiple recipients. Check the <b>Send Email</b> option in the Advanced tab and enter the required information.</p><p><b>From</b></p><p>Specify the email address of the sender. For example, someone@navicat.com.</p><p><b>To, CC</b></p><p>Specify the email address of each recipient, separating them with a comma or a semicolon (;).</p><p><b>Subject</b></p><p>Specify the email subject, optionally with a customized format.</p><p><b>Body</b></p><p>Write the email content.</p><p><b>Host (SMTP Server)</b></p><p>Enter your Simple Mail Transfer Protocol (SMTP) server for outgoing messages.</p><p><b>Port</b></p><p>Enter the port number used to connect to your outgoing email (SMTP) server.</p><p><b>Use authentication</b></p><p>Check this option and enter a User Name and Password if your SMTP server requires authorization to send emails.</p><p><b>Secure connection</b></p><p>Specify whether the connection should use a <i>TLS</i> or <i>SSL</i> secure connection, or <i>Never</i>.</p><p><b>Send Test Mail</b></p><p>Navicat will send you a test mail indicating success or failure.</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/advanced tab.jpg" style="max-width: 100%"></td></tr><p>Once you've finished configuring your automated job, you can test it by clicking the Start button in the Automation toolbar.</p><h1 class="blog-sub-title">Merging Replication</h1><p>As the name suggests, Merging Replication consists of combining data from two or more databases into a single database. 
As an exercise, we will merge the contents of the sakila and sakila2 databases into a third database named sakila_merged that will store the merged dataset.</p><h1 class="blog-sub-title">The Required Data Synchronization Jobs</h1><p>The Merging Replication job will require us to create and save two Data Synchronization profiles: one for each source database. The steps will be exactly the same as in the Creating a Data Synchronization Job section above, so we won't reiterate them here.</p><h1 class="blog-sub-title">Creating the Batch Job</h1><p>Batch jobs may be triggered by the source databases or by the target, as we did in the previous section on Snapshot Replication. However, it is usually easiest to trigger batch jobs from the target database, since the jobs will all reside on the same server. We'll do that here as well.</p><ul style="list-style-type: decimal;"><li>Click the <b>Automation</b> button in the main toolbar.</li>  <p></p><li>Then click on <b>New Batch Job</b> in the Objects toolbar to open a new batch job tab.</li>  <p></p><li>Browse the source connection, database and/or schema in the Objects pane. That will make saved jobs for that database appear in the Available Jobs bottom pane.</li>  <p></p><li>In the Available Jobs pane, select the <b>Data Synchronization</b> job type, and then move the job from the Available Jobs list to the Selected Jobs list above by double-clicking or dragging it.</li>  <p></p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/merging replication jobs selected.jpg" style="max-width: 100%"></td></tr>  <p></p><li>Click the <b>Save</b> button on the Automation toolbar and provide a descriptive name in the Save dialog. 
That will enable the <b>Set Task Schedule</b> and <b>Delete Task Schedule</b> buttons.</li>  <p></p><li>In the General tab of the Task Schedule dialog, you may again provide a description for the task as well as set several options for its execution.</li>  <p></p><li>Within the Triggers tab, tasks may be configured to run on a variety of schedules, including One Time, Daily, Weekly, Monthly, and just about any permutation of each. This time, two jobs will execute rather than one.</li>  <p></p><li>Once you've finished configuring your automated job, you can test it by clicking the <b>Start</b> button in the Automation toolbar.</li></ul><h1 class="blog-sub-title">Transactional Replication</h1><p>In Merging Replication, only the merged database contains all of the latest data. Each source database contains only the baseline data, plus whatever was inserted since it was first populated. In Transactional Replication, users receive full initial copies of the database and then receive periodic updates as data changes, so that all databases are working with the same dataset. Keeping multiple databases in sync makes this the most complex replication type.</p><h1 class="blog-sub-title">The Required Data Synchronization Jobs</h1><p>With Transactional Replication, the number of required Data Synchronization Jobs increases substantially, because data must be replicated across all of the user databases. For example, say that we had three databases called sakila, sakila2, and sakila3. 
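The bookkeeping generalizes: each user database needs one merge job into the consolidated database and one update job back out of it, so N user databases require 2 x N profiles. A hypothetical sketch (database names are illustrative):

```python
def replication_jobs(user_dbs, merged="sakila_merged"):
    """Enumerate (source, target) sync profiles for transactional-style
    replication: merge every user database into the consolidated one,
    then propagate the merged dataset back out to each of them."""
    merge_jobs = [(db, merged) for db in user_dbs]
    update_jobs = [(merged, db) for db in user_dbs]
    return merge_jobs + update_jobs

jobs = replication_jobs(["sakila", "sakila2", "sakila3"])
print(len(jobs))  # 6
```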
We could merge and propagate the full dataset across all of the user databases using a total of six Data Synchronization Jobs: three to merge the user databases, and another three to update them with the merged dataset.</p><p>Here is the Automation wizard with all six jobs:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/Automation wizard.png" style="max-width: 100%"></td></tr><p>To allow sufficient time for the data merging to complete, it is best to split the jobs into two parts, where the first merges the data and the second updates the user databases with the full dataset after a specified delay.</p><p>Here is what the Automation Job that propagates the merged dataset to the user databases might look like:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/transactional replication update jobs selected.jpg" style="max-width: 100%"></td></tr><p>Running this job two hours after the first should provide plenty of time for the merging to complete. Hence, if the first job was scheduled to run at midnight, we would set this job to start at 2 AM:</p><tr><td align="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180109/transactional replication update schedule.jpg" style="max-width: 100%"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>In this follow-up to the Database Synchronization Strategies whitepaper, we covered how to automate database replication using Navicat Premium 12's Automation utility. Used in conjunction with its Synchronization tool, it allows DBAs to automate various types of replication to run on a predefined schedule.</p><p>For more information about Navicat Premium 12, visit the <a class="default-links" href="https://www.navicat.com/en/products/navicat-premium" target="blank">product page</a>.</p>]]></description>
</item>
<item>
<title>Design SELECT Queries using Navicat's Query Builder (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/673-design-select-queries-using-navicat-s-query-builder.html</link>
<description><![CDATA[<b>January 3, 2018</b> by Robert Gravelle<br/><br/><p>Available in Non-Essentials editions of Navicat for MySQL, PostgreSQL, SQLite, MariaDB, and Navicat Premium, the Query Builder allows anyone to create and edit queries with only a cursory knowledge of SQL. In today's blog, we'll use it to write a query to fetch a list of actors that appeared in movies released during a given year.</p><h1 class="blog-sub-title">The Source Database</h1><p>The query that we'll be building will run against the <a class="default-links" href="https://dev.mysql.com/doc/sakila/en/" target="blank">Sakila sample database</a>. A former member of the MySQL AB documentation team named Mike Hillyer created the Sakila database specifically for the purpose of providing a standard schema for use in books, tutorials, and articles just like the one you're reading.</p><p>The database contains a number of tables themed around the film industry that cover everything from actors and film studios to video rental stores. Please refer to the <a class="default-links" href="http://www.databasejournal.com/features/mysql/generating-reports-on-mysql-data.html" target="_blank">Generating Reports on MySQL Data</a> tutorial for instructions on downloading and installing the Sakila database.</p><h1 class="blog-sub-title">Opening the Query Builder</h1><p>You can think of the Query Builder as a tool for building queries visually. It's accessible from the Query Designer screen. 
Let's bring it up by opening a new query:</p><ul style="list-style-type:decimal;" class="blog-list"><li>Click the Query icon on the main toolbar, followed by the New Query button from the Object toolbar:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180103/new query.jpg" style="max-width: 100%;"></td></tr><li>In the Query Designer, click the Query Builder button to open the visual SQL Builder.<p>The database objects are displayed in the left pane, whereas the right pane is divided into two portions: the upper Diagram Design pane and the lower Syntax pane:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180103/empty query builder.jpg" style="max-width: 100%;"></td></tr></li></ul><h1 class="blog-sub-title">Constructing the Actors for Year's Films Query</h1><p>It's a good idea to select the tables first, so that the Query Builder knows which fields to present for the field list:</p><ul style="list-style-type: decimal;" class="blog-list"><li>Drag a table/view from the left pane to the Diagram Design pane or double-click it to add it to the query. We'll be needing the actor, film_actor, and film tables.</li><li>You can assign table aliases by clicking on "&lt;alias&gt;" beside each table. To add a table alias, simply double-click the table name and enter the alias in the Diagram Design pane.</li><p>Note how the Query Builder already knows the table relationships. That's because foreign key constraints have already been declared on Table objects:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180103/query builder with tables.jpg" style="max-width: 100%;"></td></tr><li>To include a field in the query, check the box to the left of the field name in the Diagram Design pane. To include all the fields, click the checkbox at the left of the object caption. 
Select the actor first and last names as well as the film title.</li></ul><h3 style="font-size: 18px;">Adding WHERE Criteria</h3><p>Clicking on "&lt;Click here to add conditions&gt;" beside the WHERE keyword adds a default WHERE condition of "&lt;--&gt; = &lt;--&gt;".</p><ul style="list-style-type: decimal;" class="blog-list"><li>Click on the left-hand "&lt;--&gt;" to select a field. That opens a popup dialog that contains a List of fields as well as an Edit tab.</li><li>Click the List tab and choose the f.release_year field.</li><li>Click OK to close the dialog.</li><li>Next, click on the right-hand "&lt;--&gt;" to set the release year. This time, enter a value of "2006" in the Edit tab. Click OK to close the dialog.</li><li>Click OK to close the Query Builder. You should now see the generated SELECT statement in the Query Editor:</li><pre>SELECT
  a.first_name,
  a.last_name,
  f.title
FROM
  actor AS a
  INNER JOIN film_actor AS fa ON fa.actor_id = a.actor_id
  INNER JOIN film AS f ON fa.film_id = f.film_id
WHERE
  f.release_year = 2006</pre><li>Click the Run button to execute the query. The results will be sorted by film title:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/2018/20180103/query result.jpg" style="max-width: 100%;"></td></tr></ul><h1 class="blog-sub-title">Conclusion</h1><p>Whether you're a novice or an experienced DBA, Navicat's Query Builder makes writing SELECT queries easier than ever before. In an upcoming blog, we'll get into some of its more advanced features.</p>]]></description>
</item>
<item>
<title>How to Identify Duplicates with Non-unique Keys (Part 2)</title>
<link>https://www.navicat.com/company/aboutus/blog/672-how-to-identify-duplicates-with-non-unique-keys-part-2.html</link>
<description><![CDATA[<b>December 27, 2017</b> by Robert Gravelle<br/><br/><p>The majority of duplicate records fall into one of two categories: Duplicate Meaning and Non-unique Keys. The How to Spot and Delete Values with Duplicate Meaning in MySQL blog dealt with Duplicate Meaning; in today's follow-up, we'll address how to identify Non-unique Keys. That's where two records in the same table have the same key, but may or may not have different values and meanings.</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171227/duplicates2.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">How Does this Happen?</h1><p>Even a well-designed database can accumulate non-unique key duplicates. It often happens as a result of data imported from external sources such as text, CSV, or Excel files, as well as data feeds. Even merging data from two different databases might create duplicate keys if you combine keys in some way to generate a new key, assuming of course that the new key column supports non-unique values. For example, concatenating two numbers to generate a new key could prove problematic: </p><p></p><font face="courier New"><body><table border="0"><tr><td>Key 1</td><td>&nbsp;&nbsp;&nbsp;</td><td>Key 2</td><td>&nbsp;&nbsp;&nbsp;</td><td>New Key</td></tr><tr><td colspan="5">--------------------------</td></tr><tr><td>10</td><td></td><td>25</td><td></td><td>1025</td></tr><tr><td>102</td><td></td><td>5</td><td></td><td>1025 !!!</td></tr></table></body></font><p></p><h1 class="blog-sub-title">An Example Table</h1><p>In databases that support complex systems, it isn't always feasible to prevent duplicate keys from occurring. What's important is being able to deal with them quickly and effectively before they taint your data.</p><p>Let's begin by separating the true duplicate values from overlapping keys.</p><p>Here's the product of amalgamating two data sources of actors. 
You'll notice that there are a couple of duplicated names, specifically JENNIFER DAVIS and NICK WAHLBERG:</p><p></p><font face="courier New"><body><table border="0"><tr><td><b>id</b></td><td>&nbsp;&nbsp;&nbsp;&nbsp;</td><td><b>first_name</b></td><td>&nbsp;&nbsp;&nbsp;&nbsp;</td><td><b>last_name</b></td></tr><tr><td colspan="5">--------------------------------------</td></tr><tr><td>10</td><td></td><td>PENELOPE</td><td></td><td>GUINESS</td></tr><tr><td>12</td><td></td><td>NICK</td><td></td><td>WAHLBERG</td></tr><tr><td>14</td><td></td><td>ED</td><td></td><td>CHASE</td></tr><tr><td>22</td><td></td><td>JENNIFER</td><td></td><td>DAVIS</td></tr><tr><td>23</td><td></td><td>JOHNNY</td><td></td><td>LOLLOBRIGIDA</td></tr><tr><td>27</td><td></td><td>BETTE</td><td></td><td>NICHOLSON</td></tr><tr><td>34</td><td></td><td>GRACE</td><td></td><td>MOSTEL</td></tr><tr><td>41</td><td></td><td>NICK</td><td></td><td>WAHLBERG</td></tr><tr><td>39</td><td></td><td>JOE</td><td></td><td>SWANK</td></tr><tr><td>23</td><td></td><td>CHRISTIAN</td><td></td><td>GABLE</td></tr><tr><td>22</td><td></td><td>JENNIFER</td><td></td><td>DAVIS</td></tr></table></body></font><p></p><p>Nick Wahlberg would be an instance of Duplicate Meaning, which we explored in the last blog. JENNIFER DAVIS, on the other hand, appears in two records with the same key of 22. There is also a duplicated key that is associated with two unrelated actors: #23 for JOHNNY LOLLOBRIGIDA and CHRISTIAN GABLE. With regard to the duplicated keys of 22 and 23, the first is a true duplicate, whereas the second only needs a new key to be generated for one of the records.</p><h1 class="blog-sub-title">Identifying and Counting Duplicates</h1><p>The following query will identify all of the records of the above table that share a common id. 
I recommend using the MySQL group_concat() function to format duplicated rows together on one line:</p><p></p><font face="Courier New"><body><table border="0"><tr><td>SELECT</td></tr><tr><td>&nbsp;&nbsp;COUNT(*) as repetitions,</td></tr><tr><td>&nbsp;&nbsp;group_concat(id, ' (', last_name, ', ', first_name, ') '  SEPARATOR ' | ')</td></tr><tr><td>&nbsp;&nbsp;&nbsp;&nbsp;as row_data</td></tr><tr><td>FROM amalgamated_actors</td></tr><tr><td>GROUP BY id</td></tr><tr><td>HAVING repetitions > 1;</td></tr></table></body></font><p></p><font face="courier New"><body><table border="0"><tr><td><b>Repetitions</b></td><td>&nbsp;&nbsp;&nbsp;&nbsp;</td><td><b>row_data</b></td></tr><tr><td colspan="3">-------------------------------------------------------------</td></tr><tr><td>2</td><td></td><td>22 (DAVIS, JENNIFER) | 22 (DAVIS, JENNIFER)</td></tr><tr><td>2</td><td></td><td>23 (LOLLOBRIGIDA, JOHNNY) | 23 (GABLE, CHRISTIAN)</td></tr></table></body></font><p></p><p>If you ever wanted to find all duplicates - that is Duplicate Meaning and Non-unique Key duplicates - at the same time, you can combine the above query with one that checks for duplicated names using the UNION operator: </p><font face="courier New"><table><tr><td>SELECT</td></tr><tr><td>&nbsp;&nbsp;COUNT(*) as repetitions,</td></tr><tr><td>&nbsp;&nbsp;group_concat(id, ' (', last_name, ', ', first_name, ') '  SEPARATOR ' | ')</td></tr><tr><td>&nbsp;&nbsp;&nbsp;&nbsp;as row_data</td></tr><tr><td>FROM amalgamated_actors</td></tr><tr><td>GROUP BY id</td></tr><tr><td>HAVING repetitions > 1</td></tr><tr><td>UNION</td></tr><tr><td>SELECT</td></tr><tr><td>&nbsp;&nbsp;COUNT(*) as repetitions,</td></tr><tr><td>&nbsp;&nbsp;group_concat(id, ' (', last_name, ', ', first_name, ') '  SEPARATOR ' | ')</td></tr><tr><td>&nbsp;&nbsp;&nbsp;&nbsp;as row_data</td></tr><tr><td>FROM amalgamated_actors</td></tr><tr><td>GROUP BY last_name, first_name</td></tr><tr><td>HAVING repetitions > 1;</td></tr></table></font><p></p><p>That highlights all 
the duplicates in one result set:</p><p></p><font face="courier New"><body><table border="0"><tr><td><b>Repetitions</b></td><td>&nbsp;&nbsp;&nbsp;&nbsp;</td><td><b>row_data</b></td></tr><tr><td colspan="3">-------------------------------------------------------------</td></tr><tr><td>2</td><td></td><td>22 (DAVIS, JENNIFER) | 22 (DAVIS, JENNIFER)</td></tr><tr><td>2</td><td></td><td>23 (LOLLOBRIGIDA, JOHNNY) | 23 (GABLE, CHRISTIAN)</td></tr><tr><td>2</td><td></td><td>41 (WAHLBERG, NICK) | 12 (WAHLBERG, NICK)</td></tr></table></body></font><h1 class="blog-sub-title">Conclusion</h1><p>Crafting a query to identify duplicate keys in MySQL is relatively simple because you only need to group on the key field and include the <i>Having COUNT(*) > 1</i> clause.  In a future article, we'll review some different approaches for deleting duplicate rows and updating keys.</p>]]></description>
</item>
<item>
<title>How to Spot and Delete Values with Duplicate Meaning in MySQL (Part 1)</title>
<link>https://www.navicat.com/company/aboutus/blog/671-how-to-spot-and-delete-values-with-duplicate-meaning-in-mysql-part-1.html</link>
<description><![CDATA[<b>December 21, 2017</b> by Robert Gravelle<br/><br/><p>One of the DBA's biggest annoyances is dealing with duplicate data. No matter how much we try to guard against it, duplicates always manage to find their way into our tables. Duplicate data is a big problem because it can affect application views (where each item is supposed to be unique), skew statistics, and, in severe cases, increase server overhead.</p><p>In this tip, we'll learn how to recognize duplicate data in MySQL, as well as how to delete it without removing precious valid data.</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171221/duplicates.png" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Duplicate Types</h1><p>Most of the duplicate records that you'll encounter are one of two distinct types: Duplicate Meaning and Non-unique Keys. In this instalment we'll be dealing with Duplicate Meaning; we'll address Non-unique Keys in the next one.</p><h1 class="blog-sub-title">When a Duplicate is not a Duplicate</h1><p>Duplicate Meaning is the most common type of duplicate. It's a situation where two or more fields' contents are not the same, but their meaning is. You could think of it as a semantic duplicate.</p><p>Consider the following table excerpt:</p><font face="courier New"><body><table border="0"><tr><td><b>movie_name</b></td><td><b>media</b></td></tr><tr><td colspan="2">---------------------------</td></tr><tr><td>ACADEMY DINOSAUR</td><td>Theatre</td></tr><tr><td>ACE GOLDFINGER</td><td>Television</td></tr><tr><td>ADAPTATION HOLES</td><td>Theatre</td></tr><tr><td>AFFAIR PREJUDICE</td><td>Theatre</td></tr><tr><td>AFRICAN EGG</td><td>TV</td></tr></table></body></font><p>In the media column, the entries "Television" and "TV" have the same connotation, but are expressed differently. 
This issue is often caused by the use of free-text input where a limited dropdown would have been a better choice.</p><p>This type of duplication can be very challenging to deal with because you can't exclude duplicates using SELECT DISTINCT.</p><p>There are two ways to deal with this problem:</p><ul style="list-style-type:decimal;" class="blog-list"><li>Select data using REPLACE() to swap out values that we don't want with those that we want to see instead:</li><p></p>    <font face="courier New"><body><table border="0"><tr><td>SELECT DISTINCT</td><td>movie_name,</td></tr><tr><td></td><td>REPLACE(media, "TV", "TELEVISION") as media</td></tr><tr><td>FROM   films;</td><td></td></tr></table></body></font><p></p><li>Update the actual table data. Here's a statement that replaces all instances of TV with the preferred TELEVISION value:</li><p></p><font face="courier New"><body><table border="0"><tr><td>UPDATE films</td></tr><tr><td>SET media = REPLACE(media, "TV", "TELEVISION")</td></tr><tr><td>WHERE media = "TV";</td></tr></table></body></font></ul><p></p><p>Here's a real-life example that I came across only a month ago!</p><p>Somehow, some unwanted curly apostrophes found their way into our data. 
Notice the O'BRIEN and O’BRIEN entries:</p><font face="courier New"><body><table border="0"><tr><td><b>first_name</b></td><td><b>last_name</b></td></tr><tr><td colspan="2">---------------------</td></tr><tr><td>PENELOPE</td><td>GUINESS</td></tr><tr><td>CONAN</td><td>O'BRIEN</td></tr><tr><td>ED</td><td>CHASE</td></tr><tr><td>JENNIFER</td><td>DAVIS</td></tr><tr><td>CONAN</td><td>O’BRIEN</td></tr></table></body></font><p>We can deal with this problem in the same way we did above:</p><ul style="list-style-type:decimal;" class="blog-list"><li>Select data using REPLACE() to swap out curly apostrophes with regular single quotes so that we're always dealing with the same character:</li><p></p><font face="courier New"><body><table border="0"><tr><td>SELECT DISTINCT</td><td>first_name,</td></tr><tr><td></td><td>REPLACE(last_name, "’", "'") as last_name</td></tr><tr><td>FROM   actors</td><td></td></tr><tr><td colspan="2">WHERE  REPLACE(last_name, "’", "'") like "O'BRIEN";</td></tr></table></body></font><p></p><li>Update the actual table data. This statement replaces all curly apostrophes in the last_name column with regular single quotes:</li><p></p><font face="courier New"><body><table border="0"><tr><td>UPDATE actors</td></tr><tr><td>SET last_name = REPLACE(last_name, "’", "'")</td></tr><tr><td>WHERE last_name like "%’%";</td></tr></table></body></font><p></p></ul><h1 class="blog-sub-title">Conclusion</h1><p>Duplicate records, doubles, redundant data, duplicate rows; whatever you want to call them, they are one of the biggest banes of a DBA's life. Nevertheless, it's crucial that you weed them out on a regular basis, lest you generate faulty statistics and confuse the users who interact with the database.</p>]]></description>
</item>
<item>
<title>Create a Model from a Database in Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/669-create-a-model-from-a-database-in-navicat.html</link>
<description><![CDATA[<b>December 13, 2017</b> by Robert Gravelle<br/><br/><p>A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized and manipulated. There are many kinds of data models, but the most popular type is the relational model, which uses a table-based format.</p><p>Usually, the data warehousing staff of a business will design one or more types of data models in order to most effectively normalize the tables and plan how to most efficiently store and retrieve business data. Another advantage of doing this exercise upfront is that many professional tools like Navicat can utilize the models as plans and build the database according to their specifications.</p><p>That being said, it is an unfortunate fact that all-too-often, data models get misplaced or deleted over time. In that event, DBAs have no recourse but to either redraft the models from scratch or, if they're in the know, let their Database Management Tool create models for them based on the existing database.</p><p>In today's tip, we'll learn how to create a model from a variety of database objects in Navicat Premium.</p><h1 class="blog-sub-title">Launching the Wizard</h1><p>The process of extracting design information from a software product is known as reverse engineering. 
In Navicat, you can reverse engineer a database/schema, tables or views to a physical model.</p><p>To reverse engineer a database schema, right-click it in the Navigation Pane and choose <b>Reverse Schema to Model</b> from the popup menu:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/Reverse Schema to Model menu item.jpg" style="max-width: 100%;"></td></tr><p>Navicat will then generate a physical model from the selected schema and open it in a new Model window:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/untitled model of sakila db.jpg" style="max-width: 100%;"></td></tr><p>You can then work with the new model just as you would one that you created from scratch.  For example, you can add relationships, move objects around, and save the model.</p><h1 class="blog-sub-title">Reversing Tables to Model</h1><p>Individual tables or views may be reverse engineered into physical models as well by right-clicking them in the Navigation Pane and selecting <b>Reverse Tables to Model</b> from the popup list:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/reverse tables to model menu item.jpg" style="max-width: 100%;"></td></tr><p>That will open the selected table in a new Model window:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/actor table model.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Selecting Multiple Tables/Views</h1><p>It is also possible to select more than one table or view by selecting them in the Objects pane:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/selecting multiple tables.jpg" style="max-width: 100%;"></td></tr><p>Right-clicking anywhere within the selection and choosing <b>Reverse Tables to Model</b> from the popup list will now open those tables/views in a new Model window:</p><tr><td valign="middle"><img 
src="https://www.navicat.com/link/Blog/Image/20171213/multiple tables in model window.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Importing Databases, Schema, Tables or Views from the Model Window</h1><p>Navicat also supports the importing of databases, schema, tables or views from the Model window. A step-by-step wizard is provided to guide you through the import process.</p><ul style="list-style-type:decimal;" class="blog-list"><li>Begin by opening a new Model window, either by:</li>  <p/><ul style="list-style-type:lower-alpha;" class="blog-list"><li>Clicking the <b>Model</b> button on the main toolbar followed by the <b>New Model</b> button on the Objects toolbar:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/new model button.jpg" style="max-width: 100%;"></td></tr><p>OR</p><li>Selecting <b>File > New > Model</b> from the main menu:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/new model menu item.jpg" style="max-width: 100%;"></td></tr></ul>  <p/><li>Enter the Database Vendor and Version number in the New Model dialog and click <b>OK</b> to open a new Model window for that product:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/New Model dialog.jpg" style="max-width: 100%;"></td></tr>  <p/><li>Select <b>File -> Import from Database</b> from the Model window menu:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/Import from Database menu item.jpg" style="max-width: 100%;"></td></tr>  <p/><li>On the Import from Database dialog, select a Connection.</li>  <p/><li>Choose the databases, schemas, tables or views you want to import:</li><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171213/Import from Database dialog.jpg" style="max-width: 100%;"></td></tr>  <p/><li>Click <b>Start</b> to create the model from the selected objects.</li></ul><h1 
class="blog-sub-title">Conclusion</h1><p>Should the need ever arise to reverse engineer database objects into a model, Navicat has you covered.  Available in Navicat Premium and Enterprise Editions, the Reverse Engineering feature takes the challenge out of physical model creation from databases, schema, tables or views.</p>]]></description>
</item>
<item>
<title>Performing Database-wide Searches in Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/668-performing-database-wide-searches-in-navicat.html</link>
<description><![CDATA[<b>December 6, 2017</b> by Robert Gravelle<br/><br/><p>Whether your database of choice is an application like MySQL, MariaDB, SQL Server, Oracle, or PostgreSQL, or a cloud-based service such as Amazon RDS, Amazon Aurora, Amazon Redshift, SQL Azure, Oracle Cloud or Google Cloud, you'll inevitably find yourself looking for a piece of data whose location eludes you. For those occasions, you'll be happy that you use one of Navicat's award-winning database administration products.</p><p>Available in all editions, with the exception of Navicat Essentials, the Find In Database/Schema tool allows you to search tables, views and even object structures within a database and/or schema.</p><p>You'll find it under the Tools item in the main menu:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171206/find_in_db_menu_item.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Searching for Data</h1><p>Suppose that we were looking for a record associated with the word jungle. You could bushwhack your way through each and every table or simply enter the search term in the Find in Database screen. There are four Search Modes to choose from: Contains, Whole Word, Prefix, and powerful Regular Expression pattern matching.</p><ul style="list-style-type:disc;" class="blog-list"><li>Contains will match your search term against any part of a text value.</li><li>Whole Word will only match if the text value is exactly the same as the search term.</li><li>Prefix matches the start of a text value.</li><li>Regular Expression applies pattern matching to text values.</li></ul><p>Matching is performed on a case-insensitive basis unless you uncheck the Case Insensitive box.</p><p>The results of your search are displayed in the Find results pane. The table/view Name is displayed, along with the Number of Matched Records. 
I got two matches for jungle:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171206/find_in_db_data_results.jpg" style="max-width: 100%;"></td></tr><p>To take a better look at the matched rows, just double-click the item in the Find results pane.  That will open a new Query Editor with only the row that contains the match:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171206/data_search_result_in_query_editor.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Searching for Object Structures</h1><p>An Object Structure search looks for matches against database object names.  These would include Tables, Views, Functions, Queries, Indexes, Triggers, Events and/or Materialized Views.</p><p>The Search Modes include the same four as in data searches and can either be case sensitive or insensitive.</p><p>For this search I set the Search Mode to Prefix so that the Find In Database/Schema tool would find object names that begin with my search term.  Not surprisingly, in a movie rental store database, a Prefix of film_ hit a few times!</p><p>Below are the results.  Notice that the object type and match are both included in the Find results pane. The search term text within Matched Content is highlighted in red as well:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171206/find_in_db_object_structure_results.jpg" style="max-width: 100%;"></td></tr><p>This time, double-clicking an item in the Find results pane opens the appropriate editor for that database object.  
For example, clicking the last match in the list for the inventory table opens the Table Editor with the matching field selected and highlighted:</p><tr><td valign="middle"><img src="https://www.navicat.com/link/Blog/Image/20171206/object_structure_search_result_in_table_editor.jpg" style="max-width: 100%;"></td></tr><h1 class="blog-sub-title">Conclusion</h1><p>The Find In Database/Schema feature makes searching for text content within data and object structure names so much easier than the alternative that you'll ask yourself how you ever got by without it. For more information on how to use the tool, there's a <a class="default-links" href="https://youtu.be/4Jf2_vKBDIQ" target="_blank">video</a> about that very subject on YouTube.</p>]]></description>
</item>
<item>
<title>Compare two MySQL databases for any differences</title>
<link>https://www.navicat.com/company/aboutus/blog/666-compare-two-mysql-databases-for-any-differences.html</link>
<description><![CDATA[<b>October 16, 2017</b> by Gavin<br/><br/><p>This utility compares the data and objects of two databases and reports any differences. It identifies objects whose definitions differ between the two databases and presents them in a diff-style format. Differences in the data can be displayed in CSV, VERTICAL, GRID or TAB format.</p><p>Use the db1:db2 notation to name the two databases to compare, or specify just db1 to compare two databases of the same name.</p><p>The comparison can be run against two databases with different names on a single server by specifying only the server1 option. You can also connect to a second server by specifying the server2 option as well; in that case, db1 is taken from server1 and db2 from server2.</p><p>All of the databases on the two servers can also be compared via the all option. In that case, only databases that share the same name on both servers are compared. No database needs to be specified, but the server1 and server2 options are required. You can omit certain databases from the comparison via the exclude option.</p><p>Remember that the data shouldn't change during the comparison; errors may occur if the data is changed while the comparison is running.</p><p>The objects compared include views, procedures, events, functions, triggers and tables. You can show the count for every object type via the vv option.</p><p>The check is performed via tests. 
By default, the utility stops when the first test fails, but the run-all-tests option runs all of the tests regardless of the state of earlier ones.</p><h1 class="blog-sub-title">The tests comprise:</h1><ul style="list-style-type:decimal;" class="blog-list"><li>Check database definitions: ensure that both databases exist.</li><li>Check the existence of objects in the two databases: ensure that each object exists in both databases.</li><li>Compare object definitions: the objects are compared and any differences are shown.</li><li>Check table row counts: ensure that corresponding tables in the two databases have the same number of rows.</li><li>Check table data consistency: identify both changed rows and missing rows in each table. This step is divided into two stages: first the complete tables are compared; if that comparison fails, a row-by-row search for the differences is performed.</li></ul><p>You may wish to use the skip options to run only some of the tests. This is helpful when you are synchronizing two databases and want to avoid re-running every test throughout the procedure. Each test reports one of the following results:</p><ul style="list-style-type:disc" class="blog-list"><li><b>Pass</b> - the test succeeded.</li><li><b>Fail</b> - the test failed.</li><li><b>Skip</b> - the test was skipped because of a missing prerequisite.</li><li><b>Warn</b> - the test encountered an unusual error.</li><li><b>--</b> - the test does not apply to this object.</li></ul><p>These results tell you whether the tests were successful and whether you need to run them again.</p>]]></description>
</item>
<item>
<title>Prepare to Migrate Databases to Amazon Aurora</title>
<link>https://www.navicat.com/company/aboutus/blog/665-prepare-to-migrate-databases-to-amazon-aurora.html</link>
<description><![CDATA[<b>October 3, 2017</b> by Gavin<br/><br/><p>A MySQL-compatible relational database engine, Amazon Aurora combines the security, availability and speed of top-notch commercial databases with the simplicity and cost efficiency of open-source databases. The engine is priced at roughly one-tenth the cost of commercial engines.</p><p>After determining that Aurora is the database to be used for application development, the next stage is to choose a migration methodology and to create a database migration plan.</p><p>Migration Factors: Source Database</p><p>There are two types of migrations:</p><ul style="list-style-type:disc" class="blog-list"><li><b>Homogeneous migrations</b> - Percona, MariaDB and MySQL to Amazon Aurora</li><li><b>Heterogeneous migrations</b> - Oracle, PostgreSQL, Microsoft SQL Server to Amazon Aurora</li></ul><h1 class="blog-sub-title">Homogeneous Migration</h1><p>If the source database you want to migrate is MySQL 5.6 compatible, such as Percona or MariaDB, then you have the following migration methodologies:</p><p><b>RDS snapshot migration:</b> if your MySQL database server is already running on AWS RDS, you simply have to migrate a database snapshot to an AWS Aurora database. For migrations with downtime, you may either stop your application or just stop writing to the database while the snapshot and migration are in progress.</p><p><b>Migration with the help of native Navicat tools:</b> you can make use of <a class="default-links" href="https://www.navicat.com/en/products" target="_blank">native Navicat tools</a> to migrate the schema and data from your DB server to an AWS Aurora DB. With this approach, you get more control over the database migration procedure.</p><p><b>Migration by utilizing AWS DMS:</b> this is a tool provided by AWS for migrating your database to an AWS Aurora DB. 
Before using AWS DMS to move the data, you need to copy the database schema from the source to the target using native Navicat tools.</p><p>Using AWS DMS is a reliable choice when you do not have the experience to use native Navicat tools. It offers both a downtime and a no-downtime methodology.</p><h1 class="blog-sub-title">Heterogeneous Migrations</h1><p>When the source database you wish to migrate is not a MySQL-compatible database, such as PostgreSQL or Oracle, then you have a number of options available to complete the migration to an AWS Aurora DB.</p><p><b>Schema Migration:</b> schema migration to Amazon Aurora from a non-MySQL-compatible database can be accomplished with the AWS Schema Conversion Tool. It is a desktop app that helps you convert your database schema from a PostgreSQL, Microsoft SQL Server or Oracle database to an Amazon RDS MySQL DB or Amazon Aurora DB cluster.</p><p><b>Data Migration:</b> while supporting homogeneous migrations with zero downtime, AWS DMS (AWS Database Migration Service) also supports continuous replication across heterogeneous databases and is a preferred option for moving your source database to your destination database, for migrations with downtime as well as migrations with near-zero downtime.</p><p>With these approaches, you can effectively migrate your databases to Amazon Aurora. They are the simplest and easiest ways to migrate a database, so choose the one that best fits your requirements.</p>]]></description>
</item>
<item>
<title>Manage your AWS Aurora databases with Navicat</title>
<link>https://www.navicat.com/company/aboutus/blog/664-manage-your-aws-aurora-databases-with-navicat.html</link>
<description><![CDATA[<b>September 18, 2017</b> by Gavin<br/><br/><p>Navicat now supports Amazon Aurora databases. You can manage Amazon Aurora with Navicat, a powerful database manager, GUI, and administration tool.</p><p><b>Amazon Aurora</b> is a MySQL-compatible relational database engine that combines the availability and speed of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.</p><p><b>Navicat Premium</b> is one of the leading database management solutions for database development on all major platforms: Windows, macOS, and Linux. It lets you connect to MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. It installs on your computer and connects not only to on-premises databases but also to cloud databases such as Amazon RDS, Amazon Aurora, and Amazon Redshift.</p><p><img src="https://www.navicat.com/link/Blog/Image/20170918/navicat-amzon-aurora.jpg" style="max-width: 100%;"></p><p>Amazon removes the need to set up, operate, and scale a relational database, letting users concentrate on database design and efficient management. Combined with an Amazon instance, Navicat gives you a first-class end-to-end database development experience.</p><h1 class="blog-sub-title">Manage an AWS Aurora Database with Navicat on Windows or Mac</h1><ul style="list-style-type:decimal" class="blog-list"><li><b>In Navicat Premium, click File > New Connection > Amazon AWS > Amazon Aurora.</b></li><li><b>Enter a name that best describes your connection in the Connection Name text box.</b></li><li><b>If you use the Navicat Cloud feature, you can choose to save the connection to My Connections or to a project in the cloud from the Add To drop-down list. If you choose My Connections, the settings are stored on your local device.</b></li><li><b>Enter the cluster's endpoint details in the Endpoint and Port fields.</b></li><li><b>Enter your username and password.</b></li><li><b>Test your connection.</b></li></ul><h1 class="blog-sub-title">Migration</h1><p>Navicat provides an intuitive and powerful GUI with a full set of features for Amazon database development and maintenance. To boost efficiency and productivity, its Data Transfer feature helps you move data across DBMSs: local to SQL file, local to cloud, and local to local. You can schedule jobs to run at regular intervals and send notification emails to specified recipients on completion, so you can confirm that each migration finished successfully.</p><h1 class="blog-sub-title">Amazon Web Services</h1><p>Amazon Web Services offers a comprehensive suite of cloud-based products for development and business needs, with easy access and simple management through a web-based user interface. Large companies and organizations rely on AWS for its reliability and service-level agreements.</p><p>Some of the <b>features offered by Navicat</b> include:</p><ul style="list-style-type:disc" class="blog-list"><li><b>Unified data migration</b></li><li><b>Diverse data manipulation tools</b></li><li><b>Fast and reliable SQL editing</b></li><li><b>Smart database designer</b></li><li><b>Productivity gains</b></li><li><b>Simple and easy collaboration</b></li><li><b>Enhanced secure connections</b></li></ul><p>In short, you can manage your AWS Aurora databases efficiently with Navicat. It is reliable, fast, simple, and convenient.</p>]]></description>
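The final step of the walkthrough above (testing the connection) can also be sanity-checked from the command line before configuring the endpoint in Navicat, using the standard mysqladmin client. The endpoint below is a placeholder assumption, and 3306 is simply the MySQL default port; the command is echoed here as a dry run rather than executed.

```shell
#!/bin/sh
# Sketch: verify an Aurora cluster endpoint is reachable before adding it to
# Navicat. ENDPOINT is a placeholder assumption; 3306 is the MySQL default.
ENDPOINT="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
PORT=3306

# "mysqladmin ping" reports whether the server is alive and accepting connections.
PING_CMD="mysqladmin -h $ENDPOINT -P $PORT -u admin -p ping"
echo "$PING_CMD"
```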
</item>
<item>
<title>How to optimize the MySQL Server</title>
<link>https://www.navicat.com/company/aboutus/blog/663-how-to-optimize-the-mysql-server.html</link>
<description><![CDATA[<b>September 4, 2017</b> by Gavin<br/><br/><p>These optimization techniques for the database server mostly concern system configuration rather than tuning SQL statements. They are useful for DBAs who want to ensure performance and scalability across the servers they manage, for developers writing installation scripts that set up the database, and for anyone running MySQL themselves for development and testing who wants to improve productivity.</p><p><img src="https://www.navicat.com/link/Blog/Image/20170904/1200px-MySQL.svg_.png" style="max-width: 100%;"></p><h1 class="blog-sub-title">System Factors</h1><p>Some system-level factors have a significant impact on performance:</p><p><b>If you have enough RAM,</b> you can remove all swap devices. Some operating systems use a swap device in certain contexts even when free memory is available.</p><p><b>Avoid external locking for MyISAM tables.</b> The default is for external locking to be disabled. The external-locking and skip-external-locking options explicitly enable and disable external locking. Disabling external locking does not affect MySQL's functionality as long as you run only one server; just make sure to take down the server before running myisamchk. On some systems it is important to disable external locking because it does not work anyway.</p><p>You cannot rely on disabling external locking if you run multiple MySQL servers on the same data, or if you run myisamchk to check a table without first telling the server to flush and lock the tables. Keep in mind that using multiple MySQL servers to access the same data concurrently is generally not recommended, except when you are using NDB Cluster.</p><h1 class="blog-sub-title">Optimizing Disk I/O</h1><p>This section shows ways to organize storage devices when you can dedicate better and faster storage hardware to the database server.</p><p>Disk seeks are a major performance bottleneck. The problem becomes more apparent when the amount of data grows so large that effective caching becomes impossible.</p><p>Increase the number of available disk spindles by symlinking files to other disks or by striping across disks.</p><p>It is also a good idea to vary the RAID level according to how critical each kind of data is.</p><h1 class="blog-sub-title">Using NFS with MySQL</h1><p>Be cautious when considering NFS with MySQL. Potential problems, which vary by operating system and NFS version, include:</p><ul style="list-style-type:disc" class="blog-list"><li>MySQL data and log files placed on NFS volumes becoming locked and unavailable for use.</li><li>Data inconsistencies introduced by messages received out of order or lost network traffic. To avoid this, use TCP with the hard and intr mount options.</li><li>Maximum file size limits.</li></ul><h1 class="blog-sub-title">Use Symbolic Links</h1><p>You can move a database out of the data directory to another location and replace it with a symbolic link to the new location. You might want to do this, for example, to move a database to a file system with more free space, or to increase your system's speed by spreading your tables across different disks.</p><p>The recommended approach is to symlink entire database directories to a different disk. Symlink MyISAM tables only as a last resort.</p><p><b>You can use symbolic links for:</b></p><ul style="list-style-type:decimal" class="blog-list"><li><b>Databases on Unix</b></li><li><b>MyISAM tables on Unix</b></li><li><b>Databases on Windows</b></li></ul>]]></description>
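The database-directory symlinking technique described above can be sketched as follows. The paths are temporary demo paths chosen for illustration, not a real MySQL data directory; on a live server you would stop mysqld before relocating any database directory.

```shell
#!/bin/sh
# Sketch: move a database directory to a larger disk and symlink it back into
# place. BIG_DISK_DIR and DATADIR are demo paths (assumptions), not real MySQL
# locations; stop the server before doing this with live data.
BIG_DISK_DIR="/tmp/bigdisk/sakila"
DATADIR="/tmp/mysql_datadir_demo"

# Create the new location on the larger disk and the (demo) data directory.
mkdir -p "$BIG_DISK_DIR" "$DATADIR"

# Replace the database directory in the data dir with a symlink to the new disk.
# -n ensures an existing symlink is replaced rather than descended into.
ln -sfn "$BIG_DISK_DIR" "$DATADIR/sakila"

ls -l "$DATADIR"
```

After this, the server sees the database under its usual path while the files physically live on the other disk, which is exactly how the spindle-spreading advice above is put into practice.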
</item>
</channel>
</rss>