
News


December 12, 2024: 

Datalore 2024.5 Is Out With SQLite Support and an Improved Reporting Experience

Datalore 2024.5 introduces SQLite support and an improved reporting experience that includes PDF export capabilities, new filters in interactive tables, and much more. Datalore Cloud customers have already received the new updates automatically, and Datalore On-Premises customers can upgrade by following these instructions.

SQLite database support

Datalore now offers native support for SQLite! You can get started by selecting a database file and attaching it to your notebooks, and then you can connect to this file using SQL cells. This allows you to enjoy the speed of SQLite with the unparalleled experience of our SQL cells.
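If you prefer working in Python cells, the same attached database file can also be queried with the standard library's sqlite3 module. Here's a minimal sketch – the sales.db file name and the orders table are hypothetical stand-ins for whatever database you attach:

import sqlite3

# Query the attached SQLite file (hypothetical name) from a Python cell.
with sqlite3.connect("sales.db") as conn:
    rows = conn.execute(
        "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"
    ).fetchall()

for region, total in rows:
    print(region, total)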

Learn more about SQLite support in Datalore.

External agent support in Datalore On-Premises

Datalore’s On-Premises version now supports external agents in computation management.

This allows you to turn any machine with Docker into a source of agents for Datalore. In other words, if you need additional computational resources, you can connect any local physical or cloud machines (from any cloud provider) to your Datalore instance.
Learn more about external computational agents.

Improved reporting experience

Cascading dropdowns with dynamic lists now have default values.

Complex reports often have cascading dependencies between dynamic inputs. We’ve improved the reporting experience by implementing default values on dropdown value changes. This can help users of your reports make more informed selections quickly. For example, it can be useful if users need to drill down through hierarchical data, such as when they select a region, followed by a country, and then a city. By providing default values, the report can guide them through the most common or recommended paths, making interaction with your reports in Datalore more intuitive and efficient.

Improved interactive table functionality

We’ve added extra filters (for example, Not equals) and the ability to use search-and-replace for data in interactive tables. This can be particularly useful when you are exporting or copying table data.

Other noteworthy improvements for reports and notebooks

  • We’ve significantly improved the functionality for exporting reports as PDFs. Try the new experience!

  • Static reports now allow you to copy their tables and charts to the clipboard and paste them into your preferred editor for further analysis.

  • The interface of the Attached data tool in the editor is now more ergonomic and easier to use. Let us know how you like it.

Administration, security, and performance enhancements

  • A new SQL connection method speeds up the execution of SQL cells. We recommend enabling it for the supported databases to enjoy faster query performance in notebooks and reports.

  • Admins can now configure team-wide cloud bucket storages that will be available automatically across all workspaces. 

  • Customers using JetBrains Hub can now have their user emails synced automatically.

  • The permissions for editing notebooks and sharing workspaces have now been separated under different roles. This gives you more flexibility in permission management.

  • We’ve further improved the security of Datalore as part of our commitment to a highly secure collaborative notebook environment, evidenced, for example, by our SOC2 certification.

  • Auditing just got better with the option to configure the logging level for agents.

 

Datalore Cloud customers have already received these updates automatically. 

For Datalore On-Premises customers, it is important to back up the database that stores all the Datalore content before upgrading to the 2024.5 version.

​

December 9, 2024:

​New! Security Analysis Plugin for IntelliJ IDEA 2024.3.1

Application security is a critical requirement in modern software development. The latest release of IntelliJ IDEA 2024.3.1 – along with the optional new Security Analysis by Qodana plugin – elevates your security efforts with advanced interprocedural data flow analysis capabilities for code written in Java and Kotlin.

Trained on popular OWASP security benchmark projects, this new feature allows you to detect and address critical security vulnerabilities, such as cross-site scripting, command injections, SQL injections, and path traversal issues, in real time as you type.

This functionality is available in IntelliJ IDEA Ultimate starting from v2024.3.1 with the Security Analysis by Qodana plugin installed. To get started, open the Problems view, switch to the brand-new Security Analysis tab, and then install the required plugin.

Once the plugin is installed, you’re good to go. It will highlight problems in your editor automatically. Additionally, you can explore the built-in demo example in your Java or Kotlin project or launch taint analysis in batch mode over your whole codebase to identify places with potentially vulnerable code.

Security analysis matters more than ever

The Security Analysis by Qodana feature delivers taint analysis capabilities to your editor to help identify vulnerabilities in your application.

It traces how data flows through your code, highlighting areas where potentially harmful user input might reach vulnerable functions (sinks). Such dataflows could be exploited, potentially compromising your application’s security depending on the type of vulnerability.
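The analysis itself targets Java and Kotlin, but the source-to-sink pattern it tracks is language-agnostic. As a purely illustrative sketch (in Python, with made-up names), this is the kind of flow that counts as tainted – untrusted input reaching an SQL sink unmodified – along with the parameterized alternative that avoids it:

import sqlite3

def find_user(conn: sqlite3.Connection, user_supplied_name: str):
    # Source: user_supplied_name arrives from an untrusted request parameter.
    # Sink: it is interpolated directly into the SQL text – an injection risk.
    query = f"SELECT id, name FROM users WHERE name = '{user_supplied_name}'"
    return conn.execute(query).fetchall()

def find_user_safely(conn: sqlite3.Connection, user_supplied_name: str):
    # A parameterized query keeps the tainted value out of the SQL text.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (user_supplied_name,)
    ).fetchall()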

With the release of IntelliJ IDEA 2024.3.1 and the Security Analysis by Qodana plugin, we’re helping address specific issues from the OWASP Top 10, which highlights the most critical security risks faced by modern applications. By running taint analysis with the new optional plugin in IntelliJ IDEA, you can proactively improve both the quality and security of your codebase.

Lock down codebase security

In the intentionally vulnerable Spring Boot sample above, you can see how potentially malicious user input from a request flows into an SQL query without proper sanitization. Attackers could exploit this vulnerability to compromise the whole system.

Taint analysis in your team’s CI/CD pipeline with Qodana

Bring enhanced security to your team’s project with taint analysis in your CI/CD pipeline. This feature is available for code written in Java and Kotlin when using the 2024.3 version of the Qodana linter. 

​

December 2, 2024:

Lowering Prices for TeamCity Pipelines | TeamCity Pipelines Pulse, Issue #10​

Great news – we’re cutting the cost of TeamCity Pipelines! Prices now start at USD 15 per month. Check out all the pricing options on our website.

Why the change? Because we’re committed to making modern, reliable CI/CD solutions affordable for teams of all sizes. Building and shipping great software shouldn’t break the bank.

New UI

With this release, we’ve given TeamCity Pipelines a fresh look! The main menu has moved to the left-hand side of the interface, making it easier to navigate and keeping all the important stuff right at your fingertips.

Bug fixes and improvements

We’ve also fixed some bugs to make your TeamCity Pipelines experience smoother. Here’s what’s new:

  • There’s no longer an issue with pipelines appearing empty when clicking the Edit button.

  • Parameters now work as intended for custom JDK paths and Dockerfile paths. You can define these fields using parameters seamlessly – no more manual adjustments needed!

  • The Maven runner now allows you to specify a custom path for Maven installation in pipeline jobs.

 

November 18, 2024:

Qodana Case Studies: How Moovit Prevents Production Incidents With Code Analysis by Qodana​

Moovit – a popular commuter app serving 1.5 billion users in over 3,500 cities – has become a critical part of people’s daily transit since its inception in 2012. 

As with any large-scale application, the Moovit team has to maintain a clean and sustainable codebase to limit outages and ensure a smooth and effective service for its widespread user base. This also had to be achieved within a team working with Java, Spring, Jenkins, and IntelliJ IDEA.

Technically, this requires:

  • Null pointer detection and finding other flaws that can cause outages.

  • Reducing production issues in the development stage.

  • Enforcing critical code rules, team-wide.

  • Enhancing developer productivity and adopting best practices.

Paving the road to production readiness with Qodana

After careful deliberation, Moovit selected Qodana. They were impressed by its strong integration capabilities, competitive pricing, and high level of customization – a combination of features they couldn’t find with any other provider. Once these important boxes were checked, the Moovit team used Qodana to:

  • Help run quick scans for early issue detection.

  • Create custom rules and configurations suited to their use case. 

  • Improve scalability and cost-effectiveness throughout the development process. 

The result?

“In the long term, it has stabilized our production system and allowed developers to avoid fixing issues too late in the pipeline. We’re seeing the shift of the paradigm, where development teams follow the policies and save a few sleepless nights with Qodana.” (Amit Weinblum, Infrastructure Team Leader at Moovit)

View the official case study below for more information on the project and how Moovit benefited from prioritizing code quality with Qodana and IntelliJ IDEA. You can also view other Qodana case studies for more information on how your team can benefit from Qodana.

View Case Study

​

November 13, 2024:

Datalore On-Premises or Cloud: Which Suits You Best?​

In an era where data is the new currency, the ability to quickly gain actionable insights can be a game-changer for businesses and research institutions alike. Shortening the feedback loop between data scientists, analysts, and business intelligence teams can lead to more agile and responsive strategies, ultimately speeding up innovation and optimizing operations. 

JetBrains has always been at the forefront of this challenge, delivering best-in-class tools, including Datalore – the collaborative data science platform for analysts, business teams, and anyone else who needs quicker insights from their data.

One of the advantages of Datalore is that it offers two different operational models: On-Premises and Cloud. In this post, we will consider a few advantages of each model and cover the most typical challenges organizations might face when adding a new tool to their daily portfolio.

When is Datalore On-Premises preferable?

Datalore On-Premises is a self-managed installation in the environment of your choice – a private cloud, a public cloud, or even your own bare-metal server.

Working with internally hosted data

Many companies host their databases fully on-premises instead of migrating them offsite. The reasons vary, from compliance factors to cost savings. However, this can lead to a problem. If the data is hosted locally, but the service that needs this data is located somewhere outside of the corporate perimeter, then the data becomes inaccessible to the service. 

For this reason, on-premises deployment of data-consuming or data-processing services is the best solution when you’re working with internally hosted data, as you have full control over the networking and security aspects. This allows you to customize your configuration without jeopardizing any security measures your organization has in place.

Extended compliance requirements

Considering the nature of your data is important when choosing the right tool for processing it, as specific industries may impose additional requirements for data handling systems. 

For example, if a US-based organization wants to process health-related data, compliance with HIPAA (the Health Insurance Portability and Accountability Act) is required, while compliance with PCI DSS is necessary in the global financial sector. These requirements are mandated by law or industry standards and often apply to both the product and the organization as a whole.

In certain cases, as long as the data doesn’t leave the organizational perimeter, the product itself doesn’t have to undergo the whole process of vetting, testing, and certification by an independent third-party authority, like the Office for Civil Rights or the National Institute of Standards and Technology. 

If you work in a context with extensive compliance standards, on-premises deployment is preferable. Otherwise, your choice of tool vendor becomes significantly limited, as both the tool and the vendor need to be in compliance and hold the necessary certifications, which are expensive and difficult to obtain.

JetBrains is committed to maintaining the highest level of security when it comes to our data. An annual review by our external auditors recently confirmed our SOC 2 Type II compliance status.

Specific environment requirements

Another case where on-premises installations are particularly suitable is when there’s a high demand for customization, which is often something that SaaS platforms either can’t provide or can only provide in a limited capacity.

Here’s a story from one Datalore customer who decided to go with an on-premises deployment:

 

By using Datalore On-Premises, we can customize the environment using Linux shell scripts built into the agent image used by Datalore. We can also install our own packages using pip, Poetry, dependency files, and more without any restrictions. This reduces the environment bootstrapping time, which is essential for us as a fast-paced team.

Get Datalore On-Premises demo

When is Datalore Cloud preferable?

Datalore Cloud is our software-as-a-service offering, managed and operated by JetBrains.

No-ops strategy

Depending on your organizational goals and priorities, it may make more sense to completely avoid having anything on-premises, including servers and data storage. Instead, you can use managed services by various cloud providers, allowing you to focus on your daily tasks rather than worrying about infrastructure management.

Datalore Cloud is particularly advantageous for organizations following a no-ops strategy because it eliminates the need for dedicated IT staff to manage hardware or software updates. Additionally, its extensive list of machines provides workload scaling capabilities, ensuring optimal performance as data workloads grow and reducing your organization’s operational burden.

Starting your data journey

When a team begins a project, they usually need to choose their infrastructure and tooling, a process that can be lengthy enough to have a visible impact on their timeline.

Datalore Cloud speeds this process along because the only thing you need to start using it for data exploration is your browser. It also comes with a no-commitment 14-day free trial, allowing you to easily determine whether it meets your needs.

Once you’ve signed up for Datalore Cloud, you’re ready to explore your data immediately. With any of the paid Datalore Cloud tiers, you get 750 hours of computation time using 4 vCPUs and 16 GB of RAM (2 vCPUs and 4 GB of RAM for free tier users). We’ve found that these resources are sufficient in about 90% of cases, but if you need more, you can scale up with just a single click. Datalore Cloud has an extensive list of machine options that will suit even the most demanding users.

Flexibility

Your company’s tooling landscape can change rapidly, as your business requirements evolve together with your team. Because of this, it may not be wise to commit to the fixed, long-term seat capacity offered by Datalore On-Premises. 

For Datalore Cloud, you have more flexibility in terms of seat capacity adjustments, with an option to scale your team’s capacity based on demand and your current requirements. Additionally, having the flexibility to choose between monthly and discounted annual commitments is a plus.

Another important aspect in choosing between the deployment models is the pricing structure. On-premises solutions typically carry an infrastructure setup burden, in both hardware and personnel, that increases their total cost of ownership.

Given the above, Datalore Cloud might be more beneficial if you have a demand for computation-intensive tasks but either don’t have the expensive hardware required or don’t want to invest heavily in it. In that case, Datalore Cloud offers state-of-the-art environments prepared with all of the necessary resources at a fraction of what the hardware would cost.

​

November 6, 2024:

Updates on Unreal Engine Support in TeamCity: UGS Integration and Open-Sourcing the Plugin​

We’ve got a few exciting updates about the Unreal Engine plugin announced in the previous blog post.

TL;DR – we’re adding Unreal Game Sync (UGS) integration and open-sourcing the plugin. These updates are all about making the CI/CD experience smoother for Unreal Engine devs and getting the community more involved.

UGS

Before diving in, let’s quickly go over what Unreal Game Sync (UGS) is for anyone who might not be familiar with it or could use a refresher. In essence, UGS is a lightweight UI for Perforce. Typically, you need to build it from source to get started, and while its graphical client is a WinForms application available only on Windows, there is a command-line interface (CLI) version for other platforms. UGS has been around for a while and is widely used by game studios working with Unreal Engine as a collaboration tool.

From a CI/CD perspective, UGS provides valuable insights into a project’s status (if properly set up), such as build statuses, the ability to flag specific changelists as problematic, and more. To give a better overview, here’s a rough diagram of the components involved:

There are quite a few components here, with the central one being the Metadata Server. While deploying it isn’t strictly necessary, it does enable the full feature set of UGS. This is also where CI/CD systems post build information. As shown, there are different possible implementations of the Metadata Server, and it’s worth briefly discussing each:

  • Epic Metadata Service. This is the original and longest-standing version of the Metadata Server. It requires Windows, IIS, and the older .NET Framework 4.6.2.

  • Third-party implementation. Thanks to the open-source nature of the server, it’s possible to create your own implementation. One example is RUGS, which is much easier to set up since it supports Docker.

  • Horde. Technically, this is a full-fledged automation platform recently introduced by Epic. It includes a built-in UGS Metadata Server as well as its own build system. Although it has a built-in metadata server, it doesn’t allow publishing from external sources – the transition to Horde assumes that all metadata is generated internally. Horde is a bit outside the scope of this blog post, so we’re only mentioning it for the sake of completeness.

Entities that the build system is supposed to post to the metadata server are called “badges” in UGS terms. These badges will then show up in the CIS (continuous integration status) column in UGS. It usually looks like this:

As far as we know, the metadata server endpoints don’t currently have authentication. It appears that the server is intended to be used within a secure, closed network, but this is just our understanding and not an official statement.

For a more complete definition of UGS, please refer to the official documentation.

UGS Integration in TeamCity

Let’s take a look at UGS integration in TeamCity. As of the most recent plugin update, we support these two scenarios:

  1. Publishing a badge as a build status via the Commit Status Publisher.

  2. Publishing an arbitrary set of badges defined in your BuildGraph script.
    This applies to the “distributed” execution mode – a special runner mode in which the BuildGraph definition of the build is converted into a set of builds in TeamCity (build chain). For more details, please refer to our previous blog post or the plugin documentation.

The first scenario is pretty straightforward. You only need to configure the Commit Status Publisher build feature and set up a few required parameters.

The second scenario is more complex. In your script, you can define a set of badges and link them to specific nodes to be tracked. Before diving into the scripts, here’s a quick reminder of how the plugin maps BuildGraph entities to TeamCity entities:

  • BuildGraph Node → TeamCity build step

  • BuildGraph Agent → TeamCity build

For example, if your build process includes compiling an editor, the script might look like this (with unimportant details omitted):

<Agent Name="Build Editor and tools" Type="...">
    <Node Name="Set binary version">
        ...
    </Node>
    <Node Name="Compile Tools" Requires="Set binary version">
        ...
    </Node>
    <Node Name="Compile Editor" Requires="Compile Tools">
        ...
    </Node>
</Agent>

<Badge Name="Compile Editor" Project="//UE5/Main/Samples/Games/Lyra" Requires="Compile Editor"/>

Here, we define a badge named “Compile Editor” to track the execution of a node with the same name. In distributed BuildGraph mode, TeamCity will recognize this badge and update the build status as the process progresses.

You can define multiple badges to track different sets of nodes, and TeamCity will monitor all of them based on the specified dependencies:

<Agent Name="Build A" Type="A">
    <Node Name="Node 1">
        ...
    </Node>
</Agent>

<Agent Name="Build B" Type="B">
    <Node Name="Node 2">
        ...
    </Node>
</Agent>

<Agent Name="Build C" Type="C">
    <Node Name="Node 3">
        ...
    </Node>
</Agent>

<Badge Name="BuildProject" Project="//foo/bar/project" Requires="Node 1;Node 2;Node 3"/>

In this example, there are three nodes, each running on its own agent, that can potentially execute concurrently, since they have no dependencies on each other. The single BuildProject badge tracks all three of the resulting builds.

The badge will behave as follows:

  • “Starting” – displayed as soon as any tracked dependency begins execution.

  • “Success” – shown when all dependencies complete successfully.

  • “Failure” – indicated if any dependency encounters an error.

For complete examples, please refer to the plugin’s user guide on GitHub.

Open-sourcing the Plugin

We have received a lot of feedback since the plugin was introduced in May this year. Thank you to everyone who shared ideas for further development, submitted feature requests, or reported bugs! We’ve also been asked several times whether we’re going to open-source the plugin and, if so, when. That time is now!

With this step, we hope to:

  • Increase transparency and trust in the plugin’s codebase.

  • Engage the community for contributions and improvements.

  • Speed up bug fixes and feature implementations.

The source code is now available on GitHub and the latest release is ready for download on the marketplace. 

​

October 9, 2024:

​TeamCity Pipelines Is Now Out of Beta

In March 2024, we announced the Beta release of TeamCity Pipelines, a new approach to CI/CD that offers blazing-fast pipelines to optimize your development flow. After six months of fine-tuning, adding features, and gathering feedback, we’re excited to announce that TeamCity Pipelines is officially generally available (GA)!

This is a major milestone for us, and we couldn’t have done it without your insights and support throughout the journey.

Get your TeamCity Pipelines instance

TeamCity Pipelines is packed with exciting new features and ready to handle your CI/CD workflows with ease. Let’s take a closer look at what’s new.

What’s new in TeamCity Pipelines

Powerful YAML functionality

In TeamCity Pipelines, you can build pipelines visually or with YAML. With YAML autocompletion, real-time suggestions help you write pipelines faster and with fewer mistakes – like having a CI/CD co-pilot by your side!

Visual drag-and-drop CI/CD pipeline editor

Imagine Figma or Miro, but for CI/CD – you can easily define job dependencies, reorder tasks, and map out your pipelines visually with TeamCity Pipelines’ drag-and-drop editor.

It’s not just about making pipelines look pretty (although they do look fantastic, don’t they?) – it’s about making the entire process more intuitive and efficient. Even if you’re not a YAML expert, you’ll be able to create and edit pipelines with ease.

Dependency cache

In addition to other pipeline optimization features that speed up builds by up to 40%, we’ve added the dependency cache option. The first time you run your Maven builds, TeamCity Pipelines will cache those dependencies. In future builds, it reuses that cache, meaning faster builds and less load on your infrastructure.

Self-hosted agents

With self-hosted agents, you can now hook up your own build machines to TeamCity Pipelines. Whether you’re using your own data center or cloud infrastructure, this gives you the flexibility to leverage your existing hardware, maintain security, and scale as needed.

Agent terminal

If you need to check the environment of an agent that runs your build, the agent terminal feature is what you need. You can open the terminal and connect it directly to the agent during a job to view logs, check installed software, or debug issues – all from the UI.

The newest additions

We release a new version of the product every three weeks. With the latest update, we’ve added some pretty cool features. Here’s what’s new.

New VCS providers

In addition to GitHub, you can now also create pipelines for your GitLab and Bitbucket projects.

Clear indications why a job doesn’t start

Sometimes, due to a misconfiguration, a job never starts. Perhaps there are no compatible agents that meet the set requirements, or they’re all busy at the moment.

Now, TeamCity Pipelines provides a clear explanation of why exactly the build doesn’t start and what you can do about it to get your builds up and running.

Self-hosted agents: improvements

Along with JetBrains-hosted agents, you can run jobs on self-hosted agents and set requirements like OS, CPU count, architecture, RAM, or custom specs.

In this iteration, we added grouping of available agents by OS type. If you set agent requirements that can’t be met, TeamCity Pipelines will also let you know.

Gradle dependency cache

The Enable dependency cache option lets TeamCity cache dependencies from the first pipeline run and is now available for the Gradle runner too.

Learn more about what’s new in TeamCity Pipelines in our regular Pulse newsletter.

​

October 9, 2024:

​Qodana 2024.2 Release: Enhanced Android Linter, Hard-coded Password Detection, and More

​

​Get ready for new features and improvements with our latest Qodana 2024.2 release! This update introduces major enhancements that boost security, extend compatibility, and provide more convenience for development teams looking to increase code quality.

Hardcoded password detection

Hardcoded passwords can be a critical security issue because, when exposed, attackers can use them to leak data and access sensitive information – especially in open-source or distributed applications. Passwords should be stored in a more secure environment. This is why we’ve introduced a new inspection to help you find these passwords in your code: HardcodedPasswords.

By default, the inspection reports variables with values matching built-in regex rules for hardcoded passwords. It currently supports JavaScript, TypeScript, Java, Kotlin, Groovy, PHP, Go, Python, C#, JSON, YAML, and XML.
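As a rough illustration (the values below are invented), this is the kind of code the inspection is designed to flag, along with the environment-based alternative it nudges you toward:

import os

DB_PASSWORD = "s3cr3t-hunter2"          # literal secret assigned to a suspiciously named variable – flagged
API_TOKEN = "example-hardcoded-token"   # token-like literal – flagged if it matches a configured rule

def db_password_from_env() -> str:
    # Reading secrets from the environment (or a vault) avoids the issue entirely.
    return os.environ.get("DB_PASSWORD", "")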

How to detect hardcoded passwords, report variables with suspicious names, and implement your own regex rules

This inspection is enabled by default in Qodana. If you have a custom inspection profile, you should enable the inspection in your qodana.yaml profile as follows: 

include:
  - name: "HardcodedPasswords"

Example of a hardcoded password inspection result.

Additionally, you can view reported variables with suspicious names, like token or password, and a constant string value. To enable this option, add the following configuration in qodana.yaml:

hardcodedPasswords:
  reportDefaultSuspiciousVariableNames: true

You can also implement your own regex rules to detect and report or ignore hardcoded variable values or names, offering you the opportunity to set up flexible custom rules.

Here’s an example of a custom rules configuration in qodana.yaml:

hardcodedPasswords:
  # regex rules for variable values to report as hardcoded password
  variableValues:
    - "(?i)(xoxe-\d-[A-Z0-9]{146})"
    - "perm:(?<clearSecret>[a-zA-Z=.0-9]{96}|[a-zA-Z=.0-9]{64})"
  # regex rules for variable names to report as hardcoded password
  variableNames:
    - "youtrack"
  # regex rules for variable values to ignore (not report) as hardcoded password
  ignoreVariableValues:
    - "do-not-report-this-value"
  # regex rules for variable names to ignore (not report) as hardcoded password
  ignoreVariableNames:
    - "^(?=.*\bteamcity\b)(?=.*\bkey\b).*$"

Watch the video above to see hardcoded password detection in action.

New features and inspections for the Qodana-Android linter

With this release, our new qodana-android linter offers advanced features from the Qodana Ultimate plan, tailored specifically to Android projects. These include, but are not limited to, code coverage analysis and vulnerability checks. This enables you to get:

  • Comprehensive inspections: The qodana-android linter includes all inspections from the community version, plus additional inspections provided by the qodana-jvm linter. This ensures you can identify both Android-specific issues and language-related problems within your projects.

  • Enhanced analysis: By leveraging Qodana Ultimate features, you can perform deeper analysis on your Android projects, improving code quality (with the Code Coverage inspection) and security (with the Vulnerability Checker).


Get started with the qodana-android linter by specifying it in your qodana.yaml file:

linter: jetbrains/qodana-android:2024.2

Or use the following in the CLI:

--linter jetbrains/qodana-android:2024.2

Please note that inspections contained in external libraries and user-written inspections aren’t supported in this version.

With the qodana-android linter, you can now take your Android project analysis to the next level, ensuring that both Android-specific and general code quality issues are addressed.

Quick-fixes in Qodana for .NET – flagged, found, fixed!

The qodana-dotnet linter now supports quick-fixes, making it easier to improve your code automatically. You can choose between the cleanup and apply strategies to handle identified  issues effectively. This means your team can enjoy: 

  • Automatic code improvement: Similar to other Qodana linters, quick-fixes in qodana-dotnet help streamline the process of fixing issues in your code where automatic corrections are possible.

  • Flexible strategies: The linter supports both the cleanup and apply strategies, giving you control over how fixes are implemented.

How to use quick-fixes

To run the qodana-dotnet linter with the apply strategy, use the following command:

docker run \
  -v $(pwd)/:/data/project/ \
  -e QODANA_TOKEN="your Qodana Cloud Token" \
  jetbrains/qodana-dotnet:2024.2 \
  --apply-fixes  # use --cleanup for the cleanup strategy

Inspection changes and updates

  • The qodana-php linter now includes JavaScript inspections by default, allowing analysis of PHP and JavaScript/TypeScript projects at the same time.

  • Android Lint inspections are now integrated into the qodana-android-community and the new qodana-android linters, helping you address Android performance and security issues and find new bugs.

  • The qodana-clang linter is now available with different Clang compiler versions (15–18), so you can choose which version you want to use. The default linter qodana-clang:latest uses version 16 of the Clang compiler. To run it with a different version, you’ll need to specify it in the linter’s name:

linter: jetbrains/qodana-clang:2024.2-eap-clang<version>  # for example, jetbrains/qodana-clang:2024.2-eap-clang18

Additionally, numerous inspection changes have been made to help identify potential coding errors, such as comparing an expression to itself, to help you make your code more robust.

​​​

September 24, 2024:​​

PyCharm vs. Jupyter Notebook

Jupyter notebooks are an important tool for data scientists, providing an easy option for conducting experiments and presenting results. According to our Developer Ecosystem Survey 2023, at least 35% of data professionals use Jupyter notebooks. Furthermore, over 40% of these users spend more than 20% of their working time using these resources.

There are several implementations of notebook technology available to data professionals. First, we’ll look at the well-known Jupyter Notebook platform from Project Jupyter. For the purposes of this article, we’ll refer to the Project Jupyter implementations of notebooks as “vanilla Jupyter” to avoid confusion, since there are several other implementations of the tool.

While vanilla Jupyter notebooks can be sufficient for some tasks, there are other cases where it would be better to rely on another tool for working with data. In this article, we’ll outline the key differences between PyCharm Professional and vanilla Jupyter when it comes to data science applications.

What is Jupyter Notebook?

Jupyter Notebook is an open-source platform that allows users to create and share code, visualizations, and text. It’s primarily used for data analysis and scientific research. Although JupyterLab offers some plugins and tools, its capabilities and user experience are significantly more limited than PyCharm’s.

What is PyCharm Professional?

PyCharm is a comprehensive integrated development environment (IDE) that supports a wide range of technologies out of the box, offering deep integration between them. In addition to enhanced support for Jupyter notebooks, PyCharm Professional also provides superior database support, Python script editing, and GitHub integration, as well as support for AI Assistant, Hugging Face, dbt-Core, and much more. 

Feature comparison: PyCharm Pro vs. Jupyter

Language support

While Jupyter notebooks claim to support over 40 programming languages, their usage is limited to the .ipynb format, which makes working with traditional file extensions like .py, .sql, and others less convenient. On the other hand, while PyCharm offers support for fewer languages – Python, JavaScript and TypeScript, SQL, and R (via plugin), along with several markup languages like HTML and CSS – the support is much more comprehensive.

Often, Jupyter notebooks and Python scripts serve different purposes. Notebooks are typically used for prototyping and experimentation, while Python scripts are more suitable for production. In PyCharm Professional, you can work with both of these formats and it’s easy to convert .ipynb files into .py files. See the video below for more information.

The smartest code completion

If you’ve ever written code in PyCharm Professional, you’ll have definitely noticed its code completion capabilities. In fact, the IDE offers several different types of code completion. In addition to the standard JetBrains completion, which provides suggestions based on an extensive understanding of your project and libraries, there’s also runtime completion, which can suggest names of data objects like columns in a pandas or Polars DataFrame, and ML-powered, full-line completion that suggests entire lines of code based on the current file. Additionally, you can enhance these capabilities with LLM-powered tools such as JetBrains AI Assistant, GitHub Copilot, Amazon CodeWhisperer, and others.
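As a small example of where runtime completion pays off (the data below is invented): once a DataFrame exists in the Jupyter kernel, PyCharm can suggest its actual column names as you type.

import pandas as pd

df = pd.DataFrame({
    "city": ["Berlin", "Munich", "Prague"],
    "population": [3_700_000, 1_500_000, 1_300_000],
})

# With df defined in the kernel, typing df["..."] or df.sort_values("...")
# in the next cell offers "city" and "population" as completions.
largest = df.sort_values("population", ascending=False).head(1)
print(largest)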

In contrast, code completion in Jupyter notebooks is limited. Vanilla Jupyter notebooks lack awareness of your project’s context, there’s no local ML-based code completion, and there’s no runtime code completion for database objects.

Code quality features and debugger

PyCharm offers several tools to enhance your code quality, including smart refactorings, quick-fixes, and AI Assistant – none of which are available in vanilla Jupyter notebooks. 

If you’ve made a mistake in your code, PyCharm Professional will suggest several actions to fix it. These become visible when you click on the lightbulb icon.

PyCharm Professional also inspects code at the file and project level. To see all of the issues in your current file, click the inspections widget in the top right-hand corner of the editor.

While vanilla Jupyter notebooks can highlight issues after a code cell has been executed (as seen below), they don’t have features that allow you to analyze your entire file or project.

PyCharm provides a comprehensive and advanced debugging environment for both Python scripts and Jupyter notebooks. This debugger allows you to step into your code, running through the execution steps line by line, and pinpointing exactly where an error was made. If you’ve never used the debugger in PyCharm, you can learn how to debug a Jupyter notebook in PyCharm with the help of this blog by Dr. Jodie Burchell. In contrast, vanilla Jupyter offers basic debugging tools such as cell-by-cell execution and interactive %debug commands.

Refactorings

Web-based Jupyter notebooks lack refactoring capabilities. If you need to rename a variable, introduce a constant, or perform any other operation, you have to do it manually, cell by cell. In PyCharm Professional, you can access the Refactoring menu via Control + T and use it to make changes in your file faster. You can find more information about refactorings in PyCharm in the video.

Other code-related features

If you forget how to work with a library in vanilla Jupyter notebooks, you need to open another tab in a browser to look up the documentation, taking you out of your development environment and programming flow.

In PyCharm Professional, you can get information about a function or library you’re currently using right in the IDE by hovering over the code.

If you have a subscription to AI Assistant you can also use it for troubleshooting, such as asking it to explain code and runtime errors, as well as finding potential problems with your code before you run it.

Working with tables

DataFrames are one of the most important data formats for the majority of data professionals. In vanilla Jupyter notebooks, if you print a pandas or Polars DataFrame, you’ll see a static output with a limited number of columns and rows shown. Since the DataFrame outputs in Jupyter notebooks are static, this makes it difficult to explore your data without writing additional code.

In PyCharm Professional, you can use interactive tables that allow you to easily view, navigate, sort, and filter data. You can create charts and access essential data insights, including descriptive statistics and missing values – all without writing a single line of code. 

What’s more, the interactive tables are designed to give you a lot of information about your data, including details of: 

  • Data type symbols in the column headers

  • The size of your DataFrame (in our case it is 2390 rows and 82 columns). 

  • Descriptive statistics, missing values, and more.

If you want to get more information about how interactive tables work in PyCharm, check out the documentation.

Versioning and GitHub integration

In PyCharm Professional, you have several version control options, including Git. 

With PyCharm’s GitHub integration, you can see and revert your changes with the help of the IDE’s built-in visual diff tool. This enables you to compare changes between different commits of your notebooks. You can find an in-depth overview of the functionality in this tutorial.

Another incredibly useful feature is the local history, which automatically saves a version history of your changes. This means that if you haven’t committed something, and you need to roll back to an earlier version, you can do so with the click of a button.  

In vanilla Jupyter notebooks, you have to rely on the CLI git tool. In addition, Git is the only way of versioning your work, meaning there is no way to revert changes if you haven’t committed them.

Navigation

When you work on your project in Jupyter Notebook, you always need to navigate either within a given file or the whole project. On the other hand, the navigation functionality in PyCharm is significantly richer.

Beyond the Structure view, which is also present in JupyterLab, you can find additional features for navigating your project in our IDEs. For example, pressing Shift twice (Search Everywhere) will help you find anything in your project or settings.

In addition to that, you can navigate to specific code in your project using PyCharm Professional’s Find Usages, Go to Implementation, Go to Declaration, and other useful features.

Check out this blog post for more information about navigation in Jupyter notebooks in PyCharm.

Visualizations

In addition to libraries that are available in vanilla Jupyter notebooks, PyCharm also provides further visualization possibilities with the help of interactive tables. This means you don’t have to remember or type boilerplate code to create graphs. 

How to choose between PyCharm and vanilla Jupyter notebooks

Vanilla Jupyter notebooks are a lightweight tool. If you need to do some fast experiments, it makes sense to use this implementation. 

On the other hand, PyCharm Professional is a feature-rich IDE that simplifies the process of working with Jupyter notebooks. If you need to work with complex projects with a medium or large codebase, or you want to add a significant boost to productivity, PyCharm Professional is likely to be more suitable, allowing you to complete your data project more smoothly and quickly.

Get started with PyCharm Professional

PyCharm Professional is a data science IDE that supports Python, rich databases, Jupyter, Git, Conda, and other technologies right out of the box. Work on projects located in local or remote development environments. Whether you’re developing data pipelines, prototyping machine learning models, or analyzing data, PyCharm equips you with all the tools you need.

The best way to understand the difference between tools is to try them for yourself. We strongly recommend downloading PyCharm and testing it in your real-life projects. 

​

September 20, 2024:

What’s New in PyCharm 2024.2.2! 
​

PyCharm 2024.2.2 is here with many key updates, including Python support improvements, new Django features, and enhancements to the Data View tool window! 

Visit our What’s New page for more details on all these features and to explore many others. You can download the latest version from our download page or update your current version through our free Toolbox App.

Download PyCharm 2024.2.2

PyCharm 2024.2.2 highlights

Django enhancements  PRO

New code completion suggestions

When working with models, PyCharm now offers field completion suggestions in a variety of cases, such as Model.save(update_fields=[…]), Model.refresh_from_db(fields=[…]), Model.clean_fields(exclude=[…]), and so on.
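For example (a hypothetical model, assuming a configured Django project), PyCharm can now complete the field names inside update_fields:

from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    updated_at = models.DateTimeField(auto_now=True)

def publish(article: Article) -> None:
    article.title = article.title.strip()
    # PyCharm suggests the model's field names inside the list below.
    article.save(update_fields=["title", "updated_at"])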

Quick-fix to create a method for an unresolved ViewSet

If a ViewSet has an unresolved reference, PyCharm suggests a quick-fix to introduce the missing method. Use Alt + Enter to call it.


Data View  PRO

You can now look at n-dimensional NumPy arrays in the Data View tool window. Define the array you would like to inspect, along with a specific dimension or slice, in a special field at the bottom of the tool window, and PyCharm will display a table with the results.
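For instance (illustrative names only), you might define a 3-D array and then enter expressions like volume[0] or volume[:, 16, :] in that field to view individual slices as tables:

import numpy as np

# A 3-D array: 4 "frames" of 32 x 32 values.
volume = np.random.default_rng(0).normal(size=(4, 32, 32))

first_frame = volume[0]         # shape (32, 32) – viewable as a 2-D table
middle_rows = volume[:, 16, :]  # shape (4, 32)  – one row from each frame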

Python support improvements

Support for default types for type parameters (PEP 696)

Improve your typing experience with PyCharm’s support for Python 3.13’s ability to define default types for type parameters. The IDE now incorporates default types for type parameters for both old-style and new-style generic classes, functions, and type aliases, and it takes them into account in type inference.
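A minimal sketch of what this looks like (requires Python 3.13; the class names are illustrative):

from typing import TypeVar, Generic

# Old-style generics: the type variable itself carries a default.
T = TypeVar("T", default=int)

class Box(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value

# New-style generics: the default is written on the type parameter.
class Pair[K, V = str]:
    def __init__(self, key: K, value: V) -> None:
        self.key = key
        self.value = value

box: Box = Box(1)               # T defaults to int when not specified
pair: Pair[int] = Pair(1, "a")  # V defaults to str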

Pattern matching: Foldable match statements

To improve the readability of code with large pattern-matching statements, you can now use folding for entire match statements or for separate cases inside them. 
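For reference, this is the kind of statement the folding applies to – a small, self-contained example with several cases:

def describe(event: dict) -> str:
    match event:
        case {"type": "click", "x": int(x), "y": int(y)}:
            return f"click at ({x}, {y})"
        case {"type": "key", "key": str(key)}:
            return f"key press: {key}"
        case {"type": "scroll", "delta": int(delta)} if delta != 0:
            return f"scroll by {delta}"
        case _:
            return "unknown event"

print(describe({"type": "click", "x": 10, "y": 20}))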

Download PyCharm 2024.2.2

Visit our What’s New page to learn about other useful features included in this release, or read the release notes for the full breakdown, including more details on the features mentioned here. 

If you encounter any problems, please report them in our issue tracker so we can address them promptly. 

Connect with us on X (formerly Twitter) to share your thoughts on PyCharm 2024.2.2! 

​

September 12, 2024:

Introducing the New TeamCity Plugin for IntelliJ IDEA

We’re excited to announce the release of the updated TeamCity plugin for IntelliJ IDEA! 🎉 You can now download it directly from JetBrains Marketplace.

Using the plugin, you can trigger TeamCity builds directly from within your IDE and test any changes before committing them to the version control system.

Why get the new plugin?

This plugin has been built from the ground up to ensure it will eventually be able to replace the existing TeamCity plugin once support for the most frequently used and requested features has been added. 

Here’s what’s new in the plugin:

  • We’ve added functionality enabling you to link TeamCity projects and build configurations to your IDE project so that you only see build configurations related to your IDE project.

  • With the help of the remote run feature, you can run build configurations on your local changes without committing them to the VCS.

  • The plugin’s tool window now contains a new Personal Builds tab where past personal builds are listed. It also shows live updates of all builds executed using the remote run feature.

  • Now it’s possible to select a build configuration and watch its build status for each commit in the VCS Log tool window.

Key benefits of this updated plugin include:

  • The ability to manually configure which TeamCity projects relate to your code, giving you more control over your builds.

  • Enhanced performance that significantly reduces lag between your actions in the IDE and the TeamCity server’s response.

We’re actively developing this plugin and planning to add even more features in upcoming releases. Your feedback is critical in shaping the tool to better meet the needs of IntelliJ IDEA developers. 

You can install both the old and new plugin versions side by side, so feel free to compare and explore!

How to get started with the TeamCity plugin for IntelliJ IDEA

Initial setup

1. Download the plugin from Marketplace.

2. Once the plugin is installed, open your project in IntelliJ IDEA and invoke the plugin’s settings using the Tools | TeamCity (Experimental) | Settings… menu.

3. Click Log In and enter the following values:

  • Server URL – the HTTP(S) address of your TeamCity server.

  • Access token – your user access token that can be generated on the Your Profile | Access Tokens page in TeamCity.

With the new plugin, you can link build configurations from TeamCity directly to the project you have open. In the old plugin, this had to be configured through VCS roots, which wasn’t an easy process.

Now, users only need to create a given configuration once, and it will be saved in the source code. Everyone who downloads the project will then have it automatically configured and available without the need to set it up themselves.

Testing your local changes

One of the key benefits of the TeamCity IDEA plugins (both old and new) is the ability to run builds with your local changes before they are pushed to a remote branch, also known as a remote run. This allows you to spot issues without breaking the build for everyone else on your team.

Here’s how you can initiate a remote run from your IDE.

1. Make some changes to your code.

2. Go to Tools | TeamCity (Experimental) | Remote Run….

3. Then, under Remote Run… | Settings…, click the target build configurations that you want to run with your local changes. The plugin will then remember your choice and run builds for the same configuration(s) on subsequent remote runs. You can configure these project-configuration relations in the plugin settings.

Link your projects to TeamCity build configurations

Setting up project-configuration relations allows you to explicitly choose which configurations should be triggered depending on the introduced changes.

TeamCity’s IntelliJ IDEA integration enables you to choose the linking scope, selecting whether you want to link the whole project or only individual project modules to your TeamCity build configurations.

1. Click Tools | TeamCity (Experimental) | Settings… to open the plugin’s settings.

2. Choose the required Linking scope value:

  • PROJECT – allows you to link the entire IntelliJ IDEA project to the target build configuration(s). This option works best when you need to trigger builds of the same configuration(s) regardless of which part of your code changed.

  • MODULE – allows you to link individual modules to corresponding build configurations. For example, you can run both Build and Test configurations if the main module of your application changes, and only the Test configuration if you edit a separate module with unit and functional tests. This mode also benefits mono repositories where each module is a separate project with its own target build configuration(s).

Share your feedback

We’re still working on making the new plugin ready to replace the old one. For the time being, you can download both plugins – they won’t interfere with each other.

Is there any functionality that you’d like us to add to the new plugin? Let us know in the comments below! We want to make the plugin as useful as possible, and your feedback can help us do exactly that.

​​​​​​​

​
