The Era of Python Has Come - Should You Jump on the Bandwagon?

Python is not a new programming language by any means; it was released back in 1991. While it has always been popular, it has become much more prominent in recent years. According to a report by RedMonk, it is the second most popular programming language in the world, behind only JavaScript.

Many large companies have been using this language for years, and an increasing number of organizations are adopting it with every passing year. Many industry leaders, such as Google, Instagram, YouTube, and Reddit, use Python. Most cite the language’s versatility and ease of use as reasons for its popularity and widespread adoption.

For companies that are not Python-based, the recent surge and hype surrounding the language may change your thinking about your tech stack. Should you consider adopting Python for your next project?

In this article, we will dig a little deeper into why Python has become so popular, and whether you should consider including it in your tech stack.

Why Python is Popular

Python is often called the Swiss army knife of coding because of its versatility. Developers prefer it for its built-in data structures, vast ecosystem of libraries and packages, mature frameworks, and straightforward syntax. With these tools, solving complicated problems and building scalable websites and applications becomes much easier.

Let’s dig into some of these reasons to understand why this language is so popular.

Easy code maintenance

Python enables developers to write applications in fewer lines of code (LOC) than most other programming languages, making the application code easier to maintain. Python's syntax rules are composed mainly of plain English keywords, which improves code readability. Moreover, Python enforces a clean, consistently structured code base.

Here’s a quick comparison of LOC required to print “Hello Contentstack” in two different languages.

Java

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello Contentstack!");
    }
}

Python

print("Hello Contentstack!")

Open Source Framework and Tools

Python, being an open-source programming language, accelerates the development process and reduces costs for developers. Python’s popular frameworks and toolkits, such as PyQT, WxPython, PyJs, PyGTK, PyGUI, and Kivy, help develop scalable applications quickly.

Leveraging Data Science

Python has many perks when it comes to Data Science. With its extensive range of libraries, you can form complex algorithms for data and image processing. Since Data Science is a collection of Machine Learning, Deep Learning, and Data Analytics, Python offers libraries for each of these domains. We will learn more about these below.

Flexibility and Diversity

Few high-level languages support as many AI and ML paradigms as Python does. It supports object-oriented, structured, and aspect-oriented programming, along with a dynamic type system and automatic memory management. This level of diversity is rare to find in a single language.

When To Use Python

Python offers numerous powerful libraries and packages that can help meet developer and business requirements. Businesses need reliable web applications that help them distribute information, predict trends, and uncover insights from data. Developers, on the other hand, look for a versatile language with which they can build robust applications. For all of these needs, Python is just about ideal.

Let's look at a few applications where Python (along with the respective frameworks and libraries) can be advantageous:

Web development

Python offers several frameworks that allow you to build secure, scalable websites conveniently.

Frameworks: Django, Flask, and Pyramid

These frameworks provide the functionality to carry out some important operations, such as:

  • Input form validation
  • URL routing
  • Generating HTML, JSON, XML data with a templating engine
  • Web security against malicious attacks
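As a quick taste of what these frameworks look like in practice, here is a minimal Flask sketch with one HTML route and one JSON endpoint; the route names and data are invented for illustration.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def home():
    # A real app would render this through a templating engine (Jinja2).
    return "<h1>Hello Contentstack!</h1>"

@app.route("/api/posts")
def posts():
    # Serve structured data as JSON, as an API endpoint would.
    return jsonify([{"title": "First post", "url": "/posts/first-post"}])

# To serve the app locally: app.run(debug=True)
```

Django and Pyramid follow the same routing-plus-view idea with more batteries included; Flask simply keeps the pattern easiest to see.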

Artificial Intelligence (AI)

AI is a vast domain that includes Machine Learning, Deep Learning, and more. The motive behind AI is to program a machine to simulate human behavior. The booming industry of AI has increased the demand for Python to perform AI-related computations. To make AI computations easier and convenient, you can use the following libraries:

Libraries: NumPy, pandas, scikit-learn, etc.

These libraries provide inbuilt methods that help solve complex problems easily.
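To give a flavor of the numerical work these libraries do, here is a small sketch that fits a straight line to sample data with NumPy's least-squares solver; the data points are made up for the example.

```python
import numpy as np

# Made-up measurements that roughly follow y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.0, 8.9])

# Design matrix [x, 1]; solve for the best-fit slope and intercept.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
```

scikit-learn wraps this kind of fitting (and far more) behind estimator objects such as LinearRegression.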

Data Analysis and Visualization

Data is the core of many businesses, and if used wisely, it allows you to drive enormous value for customers. The popular online streaming platform, Netflix, uses data analysis to identify customer preferences and viewing patterns. This enables them to segment users to provide personalized recommendations.

Libraries like pandas and NumPy help when extracting critical information from the given data.

Data analysis can hardly make an impact if the results cannot be visualized. Data visualization helps in understanding patterns in data by plotting it with suitable graphs. To plot data and make more sense of it, you can use the Matplotlib and Seaborn libraries.
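As a small sketch of that workflow, the following uses pandas to summarize made-up viewing data by genre and per user — the kind of aggregation that feeds segmentation and recommendations.

```python
import pandas as pd

# Hypothetical viewing history: one row per watch session.
views = pd.DataFrame({
    "user":    ["ann", "ann", "bob", "bob", "bob", "cara"],
    "genre":   ["drama", "comedy", "drama", "drama", "sci-fi", "comedy"],
    "minutes": [50, 22, 45, 48, 60, 30],
})

# Total minutes watched per genre: a simple signal of audience preference.
by_genre = views.groupby("genre")["minutes"].sum().sort_values(ascending=False)
print(by_genre)

# Each user's favorite genre by total watch time.
favorite = (
    views.groupby(["user", "genre"])["minutes"].sum()
         .groupby("user").idxmax()
         .apply(lambda pair: pair[1])
)
print(favorite)
```

From a summary like this, a Matplotlib bar chart of `by_genre` is a one-liner (`by_genre.plot.bar()`).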

Game Development

Libraries such as PySoy (a 3D game engine for Python) and PyGame support the development of interactive games. These libraries help in developing professional games with ease. Popular games such as “Civilization IV,” “Disney’s Toontown Online,” “Vega Strike,” “Battlefield 2,” and “World of Tanks” have used Python in their development.

Web Scraping

Web scraping is the process of extracting or “scraping” data from a website. You can use Python to pull a large amount of data from websites, which can be helpful in price comparison, market research, job listings, and research and development.

Libraries: Requests, lxml, Beautiful Soup, Selenium

Framework: Scrapy
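In practice you would fetch pages with Requests and parse them with Beautiful Soup or lxml; to keep this sketch self-contained (no network, no third-party install), it parses an inline HTML snippet with the standard library's HTMLParser instead. The markup and class names are invented.

```python
from html.parser import HTMLParser

# A stand-in for a fetched product page.
HTML = """
<ul>
  <li class="price">Widget A - $19.99</li>
  <li class="price">Widget B - $24.50</li>
</ul>
"""

class PriceScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag list items whose class is "price".
        if tag == "li" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_price = False

scraper = PriceScraper()
scraper.feed(HTML)
print(scraper.prices)
```

Beautiful Soup reduces the class above to a single `soup.select("li.price")` call, and Scrapy adds crawling, scheduling, and export pipelines on top.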

And the list doesn’t end here. Python has many other applications due to the advantages mentioned above. It is undoubtedly more robust in certain areas (especially AI, ML, and IoT) than most other languages. If you have upcoming projects similar to the use cases above, it might be time to incorporate Python into your development.


Introducing Postman Collections for Contentstack APIs

We’re excited to announce the release of the Postman Collections for our Content Delivery, Content Management, and GraphQL APIs. These collections let you connect to your Contentstack accounts and try out our APIs.

This article introduces you to Postman and walks you through the process of testing Contentstack APIs using the newly-released Postman collections.

About Postman

Postman is a popular API client that lets you build and test APIs through its easy-to-use GUI. With Postman, it’s easy to write test cases, inspect responses, and do a lot more.

About Contentstack Postman collections

A Contentstack Postman collection is a set of preconfigured API requests that you can import and start using instantly. We have released collections for our Content Delivery, Content Management, and GraphQL APIs. These collections come with predefined environment variables to help you get started immediately.

The process to start using Contentstack Postman collections is relatively simple:

  1. Download Latest Contentstack Postman Collections
  2. Configure Environment Variables
  3. Make an API Request in Postman
  4. Update Your Postman Collection

Let’s look at them in detail.

Step 1 – Download Latest Contentstack Postman Collections

You need the latest desktop version of Postman to use the Contentstack Postman collections. After installing Postman, you can download the latest versions of the collections.

To download and import the collection into your Postman app, click on the respective Run in Postman button:

Content Delivery API

Content Management API

GraphQL API

Note: For Windows, downloading the Content Management API collection doesn't download the environment automatically due to the larger size of the environment file. Consequently, Windows users need to download the Content Management API - Environment file manually and import it in their Postman environment.

These collections cover all the Content Delivery, Content Management, and GraphQL API endpoints for Contentstack.

You can even download the Postman collection from our GitHub page. You can follow the instructions given in the Postman collection Readme file for more details.

Step 2 – Configure Environment Variables

When you download and import the Postman collection, you download and import the related environment as well.

Set your Contentstack account-specific values for the required environment variables. If needed, you can add your own environment variables.

Since the environment variables are referenced across multiple API requests, setting them once makes repeated use of the Postman collection much more convenient.
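To see what the variables stand in for, here is a sketch that builds (but does not send) a Content Delivery API request in Python, substituting placeholder values where the Postman environment variables would go; the content type name and all key/token values here are placeholders for your own.

```python
import urllib.request

# Placeholder values that would normally come from your Postman environment.
env = {
    "api_key":      "your_stack_api_key",
    "access_token": "your_delivery_token",
    "environment":  "development",
}

# Build a Content Delivery API request for entries of a hypothetical
# "blog_post" content type.
url = (
    "https://cdn.contentstack.io/v3/content_types/blog_post/entries"
    f"?environment={env['environment']}"
)
request = urllib.request.Request(url, headers={
    "api_key": env["api_key"],
    "access_token": env["access_token"],
})
print(request.full_url)
```

Postman performs exactly this substitution: wherever a request references `{{api_key}}`, the current environment's value is dropped in before the request is sent.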

Adding Environment Variables

To add and set up your Postman environment variables, perform the following steps:

  1. Identify the environment variable(s) that you want to define.
  2. In Postman, click on the “Manage Environments” settings cog at the top-right corner.
  3. Click on the Environment in which you want to add your variable(s).
  4. Under the VARIABLE field, enter the name of the environment variable (for example, api_key). Then, in the INITIAL VALUE field, enter the Contentstack account-specific value that should replace the variable when making requests.
  5. Once you have defined all the required environment variables, click on Update.

Note: We strongly advise against storing your API keys and tokens in your collection permanently. If you or someone else shares the collection by mistake, other users will be able to export it along with these keys. We recommend that you provide your Contentstack account-specific API keys and tokens in your environment or directly to the sample requests.

Adding Authentication Details in Environment

If you use an authtoken to authenticate your requests, your authtoken is saved in cookies when you make the Log in to your account API request and is automatically added to your collection variables.

To change this behavior, whitelist this domain in Postman; you will then be able to access the domain's cookies programmatically in scripts.

Note: To avoid this, we recommend using the stack’s Management Token along with the stack API key to make valid Content Management API requests. For more information, refer to Authentication.

Updating Your Environment

With every new API request added, you need to update your environment file. To get the latest environment variables, download the collection along with the updated environment file again, compare your existing environment with the latest environment, and add the new variables to your existing environment. Once you set up your environment, you are ready to make your API requests.

Step 3 – Make an API Request in Postman

With the Contentstack Postman collection and the environment loaded into Postman, you can now make API requests to the Contentstack API via Postman.

To make an API request, perform the following steps:

  1. Select the environment with which you want to work.
  2. Select an API Request from the Contentstack Postman collection.

Note: If you want to make changes to your variable, you can do it here.

  3. Next, click on Send at the top right to make the API request.

The API request should return with a response under the Body tab in the bottom half of the screen.

Need some more help getting started? Check out the documentation of CDA Postman Collection, CMA Postman Collection, and GraphQL Postman Collection for step-by-step instructions and helpful screenshots.

Step 4 – Update Your Postman Collection

To keep your Postman collection up-to-date, download the latest version of the Postman collection along with the updated environment again, and you are good to go.

Watch the GitHub Channel for Updates

You can also choose to watch for the latest Postman collection updates on our GitHub repository and get notifications of new releases or updates to the repository. To do so, click on the Watch button at the top-right corner of the page and select Watching.

The GitHub Readme doc will help you with the steps that you need to follow.

More Information

For more information on the Postman collections, you can refer to the following articles:


Update: This blog post has been updated on July 29th, 2020 to accommodate the changes related to the release of our GraphQL API Postman collection.


Deploying AWS Lambda Code In Different Environments

The beauty of AWS Lambda is that it allows you to deploy and execute code without needing any physical server. In other words, you can set up “serverless” architecture using AWS Lambda.

When deploying your code, you may want to deploy it in various environments (such as testing, development, and production), so that you can test your code before going live.

This article takes you through the steps of setting up Lambda functions in different environments using Lambda function aliases and versioning. By the end of this article, you will know how to do the following:

  • Create an alias for an AWS Lambda function
  • Associate an AWS Lambda function alias with an AWS API Gateway stage
  • Secure an AWS API Gateway stage with an API key and rate limiting
  • Reassociate versions with an alias

These are the essentials for managing different environments within the serverless system and for the smooth rolling out of releases.

Create an Alias for AWS Lambda Function

The following section shows you how to create environment aliases for a Lambda function and associate them with function versions. We will create two aliases for the development and production environments as an example.

Create a Lambda function

To create a Lambda function, follow the steps below:

  1. Log in to your AWS Management Console and select the AWS Lambda service from the services list.
  2. Click on the Create function button, and then the Author from scratch option.
  3. Configure the Lambda function based on your requirements. Choose Node.js 12.x as your run-time language and click on the Create function button.

creating-function-aws-lambda.png

Publish Version

After creating a Lambda function, you can publish the version as follows:

  1. Go to the newly created function.
  2. Select Publish new version from the “Actions” drop-down menu:

publish-new-version-aws-lambda.png

  3. Publish the new version with an appropriate version description, for example, “Initial Stable Version.”

version-description-aws-lambda.png

Create an Alias for Each Environment

The next step is to create an alias for each environment, as shown below:

  1. From the Actions drop-down menu, select Create a new alias.
  2. Add the alias Name, Description, and Version, as shown in the following example:

create-alias-and-assign-a-version-aws-lambda.png

Note: You have to perform this step twice, first for the development environment and second for the production environment.

  3. The version for the development environment should be $LATEST; for the production environment, it should be version 1, which we created in the Publish Version section above.

resulting-alias-and-version-aws-lambda.png

These steps ensure that a specific version of the Lambda function is assigned to the production environment, and the latest version is assigned to the development environment.

Associate the AWS Lambda Function Alias With the AWS API Gateway Stage

This section shows how to create a new API Gateway for the Lambda function and associate the development and production stages with the respective Lambda function aliases.

Create a New REST API

REST APIs help the Lambda function to fetch and send data. So, the next step is to create a REST API using the steps given below:

  1. Log in to AWS Management Console and select API Gateways from the services list.
  2. Click on the Create API button.
  3. On the Choose an API type page, go to the REST API option (the public one) and click on Build.
  4. On the next page, ensure that the Choose the protocol section has REST checked, and the Create new API section has New API checked. Enter the API name in the Settings section and click on Create API.

create-api-gateway-aws-lambda.png

  5. Now add a new resource to the API gateway as shown in the following screenshot:

create-api-resource-aws-lambda.png

  6. Add a simple method of the required type. In this example, we create a GET method for "/demo-resource".

create-method-and-associate-function-aws-lambda.png

Deploy and Add Stage Variable

Now that the API is created, let’s deploy it.

  1. Deploy the API in two new stages and name them development and production.
  2. In the Stages tab of API Gateway, navigate to each deployed stage and add a stage variable with lambdaAlias as the key and the stage name as the value.

create-stage-and-add-stage-variable.png

Associate Stage With Lambda Alias

  1. In the Resources tab of API Gateway, select the REST API method that we created above.

associate-lambda-function.png

  2. Click on Integration Request, which is associated with Mock by default.
  3. Select the Lambda region in the Integration Request section and set the Lambda function name to the name of the function followed by :${stageVariables.lambdaAlias}. In our example, this is demo-function:${stageVariables.lambdaAlias}, as shown below:

select-lambda-function.png

  4. After clicking Save, a prompt appears with a CLI command and instructions to add permission for each stage. Execute the command from the AWS CLI, replacing :${stageVariables.lambdaAlias} with each stage name.
  5. Once you have executed the command for each stage, the API Gateway stages are attached to the respective Lambda function aliases.
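The steps above work because API Gateway resolves ${stageVariables.lambdaAlias} per stage before invoking the function. This small Python sketch mimics that substitution to show which alias each stage ends up calling; the function name matches the demo-function example above.

```python
from string import Template

# API Gateway substitutes ${stageVariables.lambdaAlias} per deployed stage;
# this mimics that resolution for the two stages created above.
integration = Template("demo-function:${lambdaAlias}")

stage_variables = {
    "development": {"lambdaAlias": "development"},
    "production":  {"lambdaAlias": "production"},
}

for stage, variables in stage_variables.items():
    print(stage, "->", integration.substitute(variables))
```

Because the development alias points at $LATEST and the production alias at version 1, the two stages hit different function versions through one shared integration definition.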

Securing AWS API Gateway Stage With API Key and Rate Limiting

The following section shows how to add the API key security to the API gateway and apply appropriate rate-limiting to safeguard our respective environments.

Create Usage Plan

  1. Create an API usage plan for development and production environments with appropriate throttling and quota for the respective stage.

create-usage-plan-aws-lambda.png

Note: For more information on the rate-limiting algorithm, read the following guide: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html.

  2. While creating each usage plan, associate it with the corresponding API gateway stage. In our use case, we associate the development usage plan with the development stage of the API gateway.

associate-usage-plan-with-stage-aws-lambda.png
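The throttling settings in a usage plan follow a token-bucket model (a steady rate plus a burst capacity), as described in the AWS guide linked above. Here is a minimal Python sketch of that idea, with made-up rate and burst values.

```python
class TokenBucket:
    """Minimal token-bucket limiter: `rate` tokens/second, `burst` capacity."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: 2 requests/second with a burst of 3.
bucket = TokenBucket(rate=2, burst=3)
results = [bucket.allow(now=0.0) for _ in range(4)]  # 4 requests at once
print(results)  # the fourth request exceeds the burst and is throttled
```

API Gateway applies the same logic per usage plan, returning 429 Too Many Requests when the bucket is empty.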

Create and Add API Key to Usage Plan

  1. Select each usage plan and go to the API Keys tab.

select-api-keys-tab-aws-lambda.png

  2. Click on Create API Key and add to Usage Plan, and create the new API key.

create-api-key-for-usage-plan-aws-lambda.png

  3. After creating the API key, click on the newly created API key from the respective usage plan.

Make API Key Mandatory For Resource

  1. Go to the Resource Method created for the API.
  2. Within the Method Execution section, click on Method Request.
  3. From the Settings section, set API Key Required drop-down to true.

make-api-mandatory.png

  4. Deploy the API gateway to both the created stages for the respective environments.

After completing the above steps, you will require the respective API keys to access the stages of the API gateway associated with different aliases.
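Once API Key Required is set to true, clients must send the key in the x-api-key header, which is the header API Gateway checks. The following sketch builds (but does not send) such a request; the stage URL and key value are placeholders.

```python
import urllib.request

# Placeholder stage URL and key; API Gateway expects the key in "x-api-key".
url = "https://example.execute-api.us-east-1.amazonaws.com/development/demo-resource"
request = urllib.request.Request(url, headers={"x-api-key": "your-api-key-value"})

print(request.full_url)
```

A request to the same resource without this header is rejected by API Gateway with a 403 Forbidden response.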

Reassociating Version With Alias

This section demonstrates how you can update the alias version for a Lambda function, which you can use to either associate a new version to a production alias or revert code to an earlier version.

Publish New Version

  1. When development is complete, publish a new version of the latest Lambda function. (This step is optional if you are reverting the code to an earlier version.)

Switch Alias

Select the production alias from the Qualifiers drop-down of the AWS Lambda function.

switch-alias-from-qualifiers-tab-aws-lambda.png

Update Alias Version

  1. After switching alias from the drop-down, scroll down to the Aliases section of the Lambda function and select a new version for the alias.

select-version-for-alias-aws-lambda.png

  2. Click on Save to complete the reassociation of the version with the alias.

The above steps should help you set up multiple environments in AWS Lambda. This setup ensures that the development URL always returns results from the latest Lambda function, while the production environment stays bound to one fixed version of the function.


Why and When to Use GraphQL

REST is an API design architecture that has become the norm for implementing web services in recent years. It uses HTTP methods (POST, GET, PUT, and DELETE) to perform operations on data, typically in JSON format, which allows better and faster parsing.

However, like all technologies, REST API comes with some limitations. Here are some of the most common ones:

  • It fetches all data, whether required or not (aka “over-fetching”).
  • It makes multiple network requests to get multiple resources.
  • Sometimes resources are dependent, which causes waterfall network requests.

To overcome these, Facebook developed GraphQL, an open-source data query and manipulation language for APIs. Since then, GraphQL has gradually made its way into the mainstream and has become a new standard for API development.

GraphQL is a syntax for requesting data. It’s a query language for APIs. The beauty, however, lies in its simplicity. It lets you specify precisely what is needed, and then it fetches just that — nothing more, nothing less.

And it provides numerous other advantages.

The following covers some of the most compelling reasons to use GraphQL and looks at some common scenarios where GraphQL is useful.

Why Use GraphQL?

Strongly-Typed Schema

All the data types (such as Boolean, String, Int, Float, ID, Scalar) supported by the API are specified in the schema in the GraphQL Schema Definition Language (SDL), which helps determine the data that is available and the form in which it exists. This strongly-typed schema makes GraphQL less error-prone and provides additional validation. GraphQL also provides auto-completion for supported IDEs and code editors.

Fetch Only Requested Data (No Over- or Under-Fetching)

With GraphQL, developers can fetch exactly what is required. Nothing less, nothing more. The ability to deliver only requested data solves the issues that arise due to over-fetching and under-fetching.

Over-fetching happens when the response fetches more than is required. Consider the example of a blog home page. It displays the list of all blog posts (just the title and URLs). However, to present this list, you need to fetch all the blog posts (along with body data, images, etc.) through the API, and then show just what is required, usually through UI code. This over-fetching impacts your app’s performance and consumes more data, which is expensive for the user.

With GraphQL, you define the fields that you want to fetch (i.e., Title and URL, in this case), and it fetches the data of only these fields.

Under-fetching, on the other hand, is not fetching adequate data in a single API request. In this case, you need to make additional API requests to get related or referenced data. For instance, while displaying an individual blog post, you also need to fetch the referenced author’s profile entry so that you can display the author’s name and bio.

GraphQL handles this well. It lets you fetch all relevant data in a single query.
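For the blog example above, a single query can request just the list fields and pull the referenced author in the same round trip; the field names here are illustrative:

```graphql
query {
  posts {
    title
    url
    author {
      name
      bio
    }
  }
}
```

One request returns exactly these fields for every post, so neither the body data (over-fetching) nor a second author request (under-fetching) is needed.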

Saves Time and Bandwidth

GraphQL allows multiple resource requests in a single query call, which saves time and bandwidth by reducing the number of network round trips to the server. It also helps to prevent waterfall network requests, where you need to resolve dependent resources on previous requests. For example, consider a blog’s homepage where you need to display multiple widgets, such as recent posts, the most popular posts, categories, and featured posts. With REST architecture, displaying these would take at least five requests, while a similar scenario using GraphQL requires just a single GraphQL request.

Schema Stitching for Combining Schemas

Schema stitching allows combining multiple, different schemas into a single schema. In a microservices architecture, where each microservice handles the business logic and data for a specific domain, this is very useful. Each microservice can define its GraphQL schema, after which you use schema stitching to weave them into one schema accessible by the client.

Versioning is Not Required

In REST architecture, developers create new versions (e.g., api.domain.com/v1/, api.domain.com/v2/) due to changes in resources or the request/response structure of the resources over time. Hence, maintaining versions is a common practice. With GraphQL, there is no need to maintain versions. The resource URL or address remains the same. You can add new fields and deprecate older fields. This approach is intuitive as the client receives a deprecation warning when querying a deprecated field.

Transform Fields and Resolve with Required Shape

A user can define an alias for fields, and each of the fields can be resolved into different values. Consider an image transformation API, where a user wants to transform multiple types of images using GraphQL. The query looks like this:

 

query {
      images {
                title
                thumbnail: url(transformation: {width: 50, height: 50})
                original: url
                low_quality: url(transformation: {quality: 50})
                file_size
                content_type
      }
}

Apart from the advantages listed above, there are a few other reasons why GraphQL works well for developers.

  • Tools such as GraphQL Editor and GraphQL Playground can represent an entire API design from the provided schema.
  • The GraphQL ecosystem is growing fast. For example, Gatsby, the popular static site generator, uses GraphQL along with React.
  • It’s easy to learn and implement GraphQL.
  • GraphQL is not limited to the server-side; it can be used for the frontend as well.

When to Use GraphQL?

GraphQL works best for the following scenarios:

  • Apps for devices such as mobile phones, smartwatches, and IoT devices, where bandwidth usage matters.
  • Applications where nested data needs to be fetched in a single call. For example, a blog or social networking platform where posts need to be fetched along with nested comments and details about the person commenting.
  • A composite pattern, where an application retrieves data from multiple, different storage APIs. For example, a dashboard that fetches data from multiple sources such as logging services, backends for consumption stats, and third-party analytics tools to capture end-user interactions.

graphql-rest-api-diagram.jpeg

  • Proxy patterns on the client side, where GraphQL is added as an abstraction over an existing API so that each end user can specify the response structure based on their needs. For example, clients can create a GraphQL specification according to their needs on top of a common API provided by Firebase as a backend service.

proxy-pattern-graphql-diagram.jpeg

In this article, we examined how GraphQL is transforming the way apps are managed and why many consider it the technology of the future. Here are some helpful resources to help you get started with GraphQL:
