
Visualize your AWS Infrastructure with Amazon Neptune and AWS Config

As an organization, you run critical applications on AWS, and the infrastructure behind those applications can be spread across different accounts and have complex relationships. When you want to understand the landscape of your existing setup, it can seem daunting to go through lists of resources and try to understand how they are connected; an easier way to visualize everything would be useful. As customers continue to migrate mission-critical workloads onto AWS, their cloud assets continue to grow, and getting a holistic, contextual view of your cloud inventory is becoming critical to achieving operational excellence with your workloads.

A good understanding of, and visibility into, your cloud assets allows you to plan, predict, and mitigate any risk associated with your infrastructure. For example, you should have visibility into all the workloads running on a particular instance family: if you decide to migrate to a different instance family, a knowledge graph of all the workloads that would be affected helps you plan for the change and makes the whole process seamless. To address this growing need to manage and report asset details, you can use AWS Config. AWS Config discovers AWS resources in your account and creates a map of relationships between them (as in the following screenshot).

In this post, we use Amazon Neptune with AWS Config to gain insight into our landscape on AWS and map out relationships. We complement that with an open-source tool to visualize the data stored in Neptune.

Neptune is a fully managed graph database service that can store billions of relationships within highly connected datasets and query the graph with millisecond latency. AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. With AWS Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines.

Prerequisites

Before you get started, you need to have AWS Config enabled in your account and enable the stream for AWS Config so that any time a new resource is created, you get a notification of the resource and its relationships.
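If you want to confirm programmatically that AWS Config is recording and delivering to Amazon S3 before you proceed, the following is a minimal sketch using boto3. It only performs read-only calls and assumes your credentials allow the config:Describe* actions; the recorder, channel, and bucket names it prints will vary by account.

import boto3

# Read-only check that AWS Config is enabled and delivering to an S3 bucket.
config = boto3.client('config')

recorders = config.describe_configuration_recorder_status()['ConfigurationRecordersStatus']
for recorder in recorders:
    print(f"Recorder {recorder['name']} recording: {recorder['recording']}")

channels = config.describe_delivery_channels()['DeliveryChannels']
for channel in channels:
    # This is the bucket that the Lambda trigger and S3 Batch Operations job read from
    print(f"Delivery channel {channel['name']} -> s3://{channel['s3BucketName']}")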

Solution overview

The workflow includes the following steps:

Enable AWS Config in your AWS account and set up an Amazon Simple Storage Service (Amazon S3) bucket where all the AWS Config logs are stored.
Amazon S3 Batch Operations invokes an AWS Lambda function against the existing S3 bucket to populate the Neptune graph with the existing AWS Config inventory and build out the relationship map. The Lambda function is also triggered whenever a new AWS Config file is delivered to the S3 bucket and updates the Neptune database with the changes.
The user authenticates with Amazon Cognito and makes a call to an Amazon API Gateway endpoint.
The static website calls an AWS Lambda function, which is exposed to the internet through an API Gateway proxy.
The Lambda function queries the graph in Amazon Neptune and passes the data back to the app to render the visualization.

The resources referred to in this post, including the code samples and HTML files, are available in the amazon-neptune-aws-config-visualization GitHub repository.

Enable AWS Config in an AWS account

If you haven’t enabled AWS Config yet, you can set up AWS Config through the AWS Management Console.

If you have already enabled AWS Config, make note of the S3 bucket where all the configuration history and snapshot files are stored.

Set up a Neptune cluster

Your next step is to provision a new Neptune instance inside a VPC. For more information, see the Neptune user guide.

After you set up the cluster, note the cluster endpoint and port; you need these when inserting data into the cluster and when querying the endpoint to display the data using the open-source library vis.js. vis.js is a JavaScript library for visualizing graph data. It has components such as DataSet, Timeline, Graph2D, Graph3D, and Network for displaying data in various ways.
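Before wiring up any Lambda functions, you can sanity-check connectivity to the new cluster from a machine inside the same VPC (for example, an EC2 instance or notebook). The following is a small sketch using gremlinpython; the endpoint and port values are placeholders for the ones you noted above.

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholders: the cluster endpoint and port you noted after creating the cluster.
CLUSTER_ENDPOINT = '<your-neptune-cluster-endpoint>'
CLUSTER_PORT = '8182'

# Must run from inside the Neptune VPC (or over a connection that can reach it).
conn = DriverRemoteConnection(f'wss://{CLUSTER_ENDPOINT}:{CLUSTER_PORT}/gremlin', 'g')
g = Graph().traversal().withRemote(conn)

# A brand-new cluster should report zero vertices.
print('Vertex count:', g.V().count().next())
conn.close()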

Configure a Lambda function to trigger when AWS Config delivers a file to an S3 bucket

After you set up the cluster, you can create a Lambda function to be invoked when AWS Config sends a file to Amazon S3.

Create a directory with the name configparser_lambda and run the following commands to install the packages for our function to use:

pip3.6 install --target ./package gremlinpython

pip3.6 install --target ./package requests

Create a file configparser_lambdafunction.py in the directory and open it in a text editor.
Enter the following into the configparser_lambdafunction.py file:

from __future__ import print_function
import boto3
import json
import os, sys
from io import BytesIO
import gzip
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.traversal import T
import requests
import urllib.parse

CLUSTER_ENDPOINT = os.environ['CLUSTER_ENDPOINT']
CLUSTER_PORT = os.environ['CLUSTER_PORT']

# Make the remote connection to Neptune outside the handler for reuse across invocations
remoteConn = DriverRemoteConnection('wss://' + CLUSTER_ENDPOINT + ':' + CLUSTER_PORT + '/gremlin', 'g')
graph = Graph()
g = graph.traversal().withRemote(remoteConn)

def run_sample_gremlin_websocket():
    output = g.V().hasLabel('Instance').toList()
    return output
    #remoteConn.close()

def run_sample_gremlin_http():
    URL = 'https://' + CLUSTER_ENDPOINT + ':' + CLUSTER_PORT + '/gremlin'
    r = requests.post(URL, data='{"gremlin":"g.V().hasLabel(\'Instance\').valueMap().with_(\'~tinkerpop.valueMap.tokens\').toList()"}')
    return r

def get_all_vertex():
    vertices = g.V().count()
    print(vertices)

def insert_vertex_graph(vertex_id, vertex_label):
    # Only add the vertex if it does not already exist
    node_exists_id = g.V(str(vertex_id)).toList()
    if node_exists_id:
        return
    result = g.addV(str(vertex_label)).property(T.id, str(vertex_id)).next()

def insert_edge_graph(edge_id, edge_from, edge_to, to_vertex_label, edge_label):
    # The "to" end of the edge is also a vertex; make sure it exists first
    insert_vertex_graph(edge_to, to_vertex_label)

    # Only add the edge if it does not already exist
    edge_exists_id = g.E(str(edge_id)).toList()
    if edge_exists_id:
        return

    result = g.V(str(edge_from)).addE(str(edge_label)).to(g.V(str(edge_to))).property(T.id, str(edge_id)).next()

def parse_vertex_info(vertex_input):

    # Vertex needs to have an id and a label before it can be inserted in Neptune
    # id for the vertex, required field
    id = vertex_input['resourceId']

    # label for the vertex, required field
    label = vertex_input['resourceType']

    itemStatus = vertex_input['configurationItemStatus']

    if itemStatus == "ResourceDeleted":
        node_exists_id = g.V(str(id)).toList()
        if node_exists_id:
            result = g.addV(str(itemStatus)).property(T.id, str(id)).next()
            return
        else:
            insert_vertex_graph(id, label)
            result = g.addV(str(itemStatus)).property(T.id, str(id)).next()
            return

    insert_vertex_graph(id, label)

def parse_edge_info(edge_input):
    itemStatus = edge_input['configurationItemStatus']

    if itemStatus == "ResourceDeleted":
        return

    # Edge needs to have id, from, to and label before it can be inserted in Neptune
    # The edge "to" is also a vertex
    for index, item in enumerate(edge_input['relationships']):

        # from vertex
        from_vertex = edge_input['resourceId']

        # to vertex
        if "resourceId" in item:
            to_vertex = item['resourceId']
        if "resourceName" in item:
            to_vertex = item['resourceName']

        to_vertex_label = item['resourceType']

        # id is a concatenation of from and to, which makes it unique
        id = from_vertex + ':' + to_vertex

        # label is the relationship
        label = item['name']
        insert_edge_graph(id, from_vertex, to_vertex, to_vertex_label, label)

def lambda_handler(event, context):
    #print(event['tasks'][0]['s3BucketArn'])
    #print(event['tasks'][0]['s3Key'])

    # Check whether the event source is S3 or an S3 Batch Operations job
    if ('Records' in event and event['Records'][0]['eventSource'] == "aws:s3"):
        if (event['Records'][0]['s3'] and event['Records'][0]['s3']['bucket'] and event['Records'][0]['s3']['bucket']['name']):
            bucket = event['Records'][0]['s3']['bucket']['name']
        if (event['Records'][0]['s3'] and event['Records'][0]['s3']['object'] and event['Records'][0]['s3']['object']['key']):
            object = event['Records'][0]['s3']['object']['key']
    elif ('tasks' in event and event['tasks'][0]['s3BucketArn']):
        bucket = event['tasks'][0]['s3BucketArn'].split(':::')[1]
        if (event['tasks'][0]['s3Key']):
            object = event['tasks'][0]['s3Key']

    # Use below for quick testing by passing the variables directly
    #bucket = event['tasks'][0]['s3BucketArn']
    #object = event['tasks'][0]['s3Key']
    s3 = boto3.resource("s3")
    if (object.endswith('.gz')):
        obj = s3.Object(bucket, urllib.parse.unquote(object))
        with gzip.GzipFile(fileobj=obj.get()["Body"]) as gzipfile:
            content = json.loads(gzipfile.read())

        for index, item in enumerate(content['configurationItems']):
            parse_vertex_info(item)
            parse_edge_info(item)

    #get_all_vertex()

    return {
        "statusCode": 200,
        "body": json.dumps('Hello from Lambda!')
    }
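Before you package and deploy the function, you can optionally smoke-test it locally by passing it a hand-built S3 event. The sketch below is an assumption about how you might drive the handler; it requires CLUSTER_ENDPOINT and CLUSTER_PORT in the environment, network access to the Neptune VPC, and credentials that can read the S3 object. The bucket name and key are placeholders for a real AWS Config .gz object in your bucket.

# test_invoke.py - quick local smoke test for the handler (sketch)
# Importing the module opens the Neptune connection, so this must run from a
# host that can reach the cluster (for example, an EC2 instance in the VPC).
import configparser_lambdafunction as fn

sample_event = {
    "Records": [{
        "eventSource": "aws:s3",
        "s3": {
            "bucket": {"name": "<your-config-bucket>"},            # placeholder
            "object": {"key": "<path-to-config-file>.json.gz"}     # placeholder
        }
    }]
}

print(fn.lambda_handler(sample_event, None))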

Create a .zip archive of the dependencies:

cd package/
zip -r9 ${OLDPWD}/function.zip .

Add your function code to the archive:

cd $OLDPWD
zip -g function.zip configparser_lambdafunction.py

When the Lambda deployment package (.zip file) is ready, create the Lambda function using the AWS Command Line Interface (AWS CLI). For instructions to install and configure the AWS CLI on your operating system, see Installing, updating, and uninstalling the AWS CLI.
After you install the AWS CLI, run aws configure to set your access key, secret access key, and AWS Region.
Run the following commands to create a Lambda function within the same VPC as the Neptune cluster. (The Lambda function needs an AWS Identity and Access Management (IAM) execution role to be able to create ENIs in the VPC for accessing the Neptune instance.)

aws iam create-role --path /service-role/ --role-name lambda-vpc-access-role --assume-role-policy-document '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" }]}'

Attach the following policy to the role:

aws iam attach-role-policy --role-name lambda-vpc-access-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaENIManagementAccess

aws iam attach-role-policy --role-name lambda-vpc-access-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

Create the Lambda function using the deployment package and IAM role created in previous steps. (Use subnet-ids from the VPC in which the Neptune cluster is provisioned.)

aws lambda create-function --function-name <lambda-function-name> \
--role "arn:aws:iam::<aws-account-number>:role/service-role/lambda-vpc-access-role" \
--runtime python3.6 --handler configparser_lambdafunction.lambda_handler \
--description "Lambda function to parse AWS Config and make gremlin calls to Amazon Neptune" \
--timeout 120 --memory-size 256 --publish \
--vpc-config SubnetIds=<subnet-ids>,SecurityGroupIds=<sec-group-id> \
--zip-file fileb://function.zip \
--environment Variables="{CLUSTER_ENDPOINT=<your-neptune-cluster-endpoint>,CLUSTER_PORT=<your-neptune-db-port>}"

After you create the function, add the S3 bucket as the trigger for the Lambda function.

Use the S3 bucket where AWS Config delivers its files, and limit the trigger to objects with the .gz suffix.
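If you prefer to configure the trigger programmatically instead of on the console, the following is a sketch using boto3; the bucket name, function name, ARN, and account ID are placeholders. S3 must first be granted permission to invoke the function, then the bucket notification is created with the .gz suffix filter.

import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

# 1. Allow the AWS Config bucket to invoke the function.
lambda_client.add_permission(
    FunctionName='<lambda-function-name>',
    StatementId='allow-config-bucket',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::<config-bucket-name>',
    SourceAccount='<aws-account-number>'
)

# 2. Invoke the function for every new .gz object that AWS Config delivers.
s3.put_bucket_notification_configuration(
    Bucket='<config-bucket-name>',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:<aws-region-code>:<aws-account-number>:function:<lambda-function-name>',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': '.gz'}]}}
        }]
    }
)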

Use Lambda with S3 Batch Operations

If AWS Config was already enabled in your account, you need to ingest the existing resource data into the Neptune cluster that you created. To do so, you set up S3 Batch Operations, which lets you invoke Lambda functions to perform custom actions on objects. In this use case, you use the function to read all the existing files in your AWS Config S3 bucket and insert the data into the Neptune cluster. For more information, see Performing S3 Batch Operations.

You need to specify a manifest for the S3 Batch Operations job. For this setup, you need an Amazon S3 inventory report enabled for the bucket where AWS Config delivers its files. For instructions, see Configuring Amazon S3 inventory. Make sure to use the CSV output format for the inventory.

After the inventory is set up and the first report has been delivered (which can take up to 48 hours), you can create an S3 Batch Operations job to invoke the Lambda function.

On the S3 Batch Operations console, choose Create job.
For Region, choose your Region.
For Manifest format, choose S3 inventory report.
For Manifest object, enter the location of the file.
Choose Next.
In the Choose operation section, for Operation type, select Invoke AWS Lambda function.
For Invoke Lambda function, select Choose from function in your account.
For Lambda function, choose the function you created.
Choose Next.
In the Configure additional options section, for Description, enter a description of your job.
For Priority, choose your priority.
For Completion report, select Generate completion report.
Select All tasks.
Enter the bucket for the report.
Under Permissions, select Choose from existing IAM roles.
Choose the IAM role that grants the necessary permissions (a role policy and trust policy that you can use are also displayed).
Choose Next.
Review your job details and choose Create job.

The job enters the Preparing state. S3 Batch Operations checks the manifest and does some other verification, and the job enters the Awaiting your confirmation state. You can select it and choose Confirm and run, which runs the job.
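If you prefer to create the same job from code rather than the console, the following is a rough sketch using the boto3 s3control client. The account ID, ARNs, role, report bucket, and the inventory manifest location and ETag are all placeholders you must supply; the manifest must point at the CSV inventory manifest.json generated for your AWS Config bucket.

import boto3

s3control = boto3.client('s3control')

response = s3control.create_job(
    AccountId='<aws-account-number>',
    ConfirmationRequired=True,   # the job waits in "Awaiting your confirmation"
    Operation={'LambdaInvoke': {
        'FunctionArn': 'arn:aws:lambda:<aws-region-code>:<aws-account-number>:function:<lambda-function-name>'
    }},
    Manifest={
        'Spec': {'Format': 'S3InventoryReport_CSV_20161130'},
        'Location': {
            'ObjectArn': 'arn:aws:s3:::<inventory-bucket>/<prefix>/manifest.json',
            'ETag': '<etag-of-manifest.json>'
        }
    },
    Report={
        'Bucket': 'arn:aws:s3:::<report-bucket>',
        'Format': 'Report_CSV_20180820',
        'Enabled': True,
        'Prefix': 'batch-op-reports',
        'ReportScope': 'AllTasks'
    },
    Priority=10,
    RoleArn='arn:aws:iam::<aws-account-number>:role/<s3-batch-operations-role>'
)
print('Created job:', response['JobId'])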

Create a Lambda function to access data in the Neptune cluster for visualization

After the data is loaded in Neptune, you need to create another Lambda function to access the data and expose it via a RESTful interface through API Gateway.

Run the following commands:

mkdir visualizeneptune
cd visualizeneptune

Create a file named visualizeneptune.js in this directory and add the following code:

const gremlin = require('gremlin');

exports.handler = async event => {
    const {DriverRemoteConnection} = gremlin.driver;
    const {Graph} = gremlin.structure;
    // Use wss:// for secure connections. See https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-ssl.html
    const dc = new DriverRemoteConnection(
        `wss://${process.env.NEPTUNE_CLUSTER_ENDPOINT}:${process.env.NEPTUNE_PORT}/gremlin`,
        {mimeType: 'application/vnd.gremlin-v2.0+json'}
    );
    const graph = new Graph();
    const g = graph.traversal().withRemote(dc);
    const withTokens = '~tinkerpop.valueMap.tokens';

    try {
        let data = [];
        const {
            resource_id, resource_label
        } = event.queryStringParameters || {};

        if (event.pathParameters.proxy.match(/searchbyid/ig)) {
            data = await g.V().has('~id', resource_id)
                .limit(20)
                .valueMap()
                .with_(withTokens)
                .toList();
        } else if (event.pathParameters.proxy.match(/searchbylabel/ig)) {
            data = await g.V()
                .hasLabel(resource_label)
                .limit(1000)
                .valueMap()
                .with_(withTokens)
                .toList();
        } else if (event.pathParameters.proxy.match(/neighbours/ig)) {
            data[0] = await g.V().has('~id', resource_id)
                .out()
                .valueMap()
                .with_(withTokens)
                .limit(10)
                .toList();
            data[1] = await g.V().has('~id', resource_id)
                .outE()
                .limit(20)
                .toList();
        }

        console.log(data);
        dc.close();
        return formatResponse(data);
    } catch (error) {
        console.log('ERROR', error);
        dc.close();
    }
};

const formatResponse = payload => {
    return {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'OPTIONS, POST, GET',
            'Access-Control-Max-Age': 2592000, // 30 days
            'Access-Control-Allow-Headers': '*',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
    };
};

Create a file package.json in the same directory and add the following dependencies, which the Lambda function requires:

{
  "dependencies": {
    "gremlin": "3.4.6"
  }
}

Run the following commands in the directory where you saved the files:

npm install
zip lambdapackage.zip -r node_modules/ visualizeneptune.js

When the Lambda deployment package (.zip file) is ready, we can create the Lambda function using the AWS CLI.

Run the following commands to create a Lambda function within the same VPC as the Neptune cluster. (We create the Lambda function using the deployment package and IAM role created earlier, and use subnet-ids from the VPC in which the Neptune cluster is provisioned.)

aws lambda create-function --function-name <lambda-function-name> \
--role "arn:aws:iam::<aws-account-number>:role/service-role/lambda-vpc-access-role" \
--runtime nodejs10.x --handler visualizeneptune.handler \
--description "Lambda function to make gremlin calls to Amazon Neptune" \
--timeout 120 --memory-size 256 --publish \
--vpc-config SubnetIds=<subnet-ids>,SecurityGroupIds=<sec-group-id> \
--zip-file fileb://lambdapackage.zip \
--environment Variables="{NEPTUNE_CLUSTER_ENDPOINT=<your-neptune-cluster-endpoint>,NEPTUNE_PORT=<your-neptune-db-port>}"

We recommend you go through the Lambda function source code at this point to understand how to query data using Gremlin APIs and how to parse and reformat the data to send to clients.

Create and configure API Gateway with a proxy API

We expose the Lambda function created in the earlier step through API Gateway Proxy API. For more information, see Set up Lambda proxy integrations in API Gateway.

Create the RESTful API using the following command from the AWS CLI:

aws apigateway create-rest-api --name lambda-neptune-proxy-api --description "API Proxy for AWS Lambda function in VPC accessing Amazon Neptune"

Note the value of the id field from the earlier output and use it as the <rest-api-id> value in the following code:

aws apigateway get-resources --rest-api-id <rest-api-id>

Note the value of the id field from the earlier output and use it as the <parent-id> value in the following command, which creates a resource under the root structure of the API:

aws apigateway create-resource --rest-api-id <rest-api-id> --parent-id <parent-id> --path-part {proxy+}

Note the value of the id field from the output and use it as the <resource-id> in the following command:

aws apigateway put-method --rest-api-id <rest-api-id> --resource-id <resource-id> --http-method ANY \
--authorization-type NONE

So far, we created an API, API resource, and methods for that resource (GET, PUT, POST, DELETE, or ANY for all methods). We now create the API method integration that identifies the Lambda function for which this resource acts as a proxy.

Use the appropriate values obtained from the previous commands in the following code:

aws apigateway put-integration --rest-api-id <rest-api-id> \
--resource-id <resource-id> --http-method ANY --type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:<aws-region-code>:lambda:path/2015-03-31/functions/arn:aws:lambda:<aws-region-code>:<aws-account-number>:function:<lambda-function-name>/invocations

Deploy the API using the following command:

aws apigateway create-deployment --rest-api-id <rest-api-id> --stage-name test

For API Gateway to invoke the Lambda function, we either need to provide an execution role to the API integration or explicitly add a resource-based permission to the Lambda function stating that the API can invoke it. This permission is also reflected on the Lambda console.

Run the following command to grant API Gateway permission to invoke the Lambda function:

aws lambda add-permission --function-name <lambda-function-name> \
--statement-id <any-unique-id> --action lambda:* \
--principal apigateway.amazonaws.com \
--source-arn arn:aws:execute-api:<aws-region-code>:<aws-account-number>:<rest-api-id>/*/*/*

We have now created an API Gateway proxy for the Lambda function. To configure authentication and authorization with API Gateway, we can use Amazon Cognito as described in the documentation.
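To verify the wiring end to end (before adding Cognito authorization), you can call the deployed stage directly. The following sketch uses Python requests; the stage URL is a placeholder, and the path segments (searchbyid, searchbylabel, neighbours) and query parameters match what the Lambda function above expects. The resource_label value is an assumption based on how the ingestion function labels vertices (the AWS Config resourceType); check how labels actually appear in your graph.

import requests

# Placeholder: your deployed stage URL, e.g. https://<rest-api-id>.execute-api.<region>.amazonaws.com/test
API_URL = 'https://<rest-api-id>.execute-api.<aws-region-code>.amazonaws.com/test'

# Fetch vertices with a given label (see the searchbylabel branch in the Lambda function)
resp = requests.get(f'{API_URL}/searchbylabel', params={'resource_label': 'AWS::EC2::Instance'})
print(resp.status_code, resp.json())

# Fetch a single resource and its neighbours by resource ID
resp = requests.get(f'{API_URL}/neighbours', params={'resource_id': '<resource-id>'})
print(resp.status_code, resp.json())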

Configure an S3 bucket to host a static website and upload the HTML file

Now that we have all the backend infrastructure ready to handle the API requests getting data from Neptune, let’s create an S3 bucket to host a static website.

Run the following commands to create an S3 bucket as a static website and upload visualize-graph.html into it:

# create an S3 bucket with public read access
aws s3api create-bucket --bucket <bucket-name> --acl public-read --region <aws-region-code> --create-bucket-configuration LocationConstraint=<aws-region-code>

# configure website hosting on the S3 bucket
aws s3api put-bucket-website --bucket <bucket-name> --website-configuration '{
  "IndexDocument": {
    "Suffix": "visualize-graph.html"
  },
  "ErrorDocument": {
    "Key": "visualization-error.html"
  }
}'

Upload the HTML file to Amazon S3

The PROXY_API_URL placeholder in the website code (visualize-graph.html) has to be updated to reflect the API Gateway endpoint that we created in the previous steps.

Run the following commands to replace the value of PROXY_API_URL with the API Gateway endpoint. You can obtain the URL on the API Gateway console: navigate to the API and find it listed as Invoke URL in the Stages section. You can also construct the URL using the following template.

https://<rest-api-id>.execute-api.<aws-region-code>.amazonaws.com/<stage-name>

When you run the following commands, make sure to escape the forward slashes in the URL (as in the examples).

For Linux, use the following code:

sed -i -e 's/PROXY_API_URL/<API-Gateway-Endpoint>/g' visualize-graph.html

The following is an example of the completed code:

sed -i -e 's/PROXY_API_URL/https:\/\/7brms4lx43.execute-api.us-east-2.amazonaws.com\/test/g' visualize-graph.html

For MacOS, use the following code:

find . -type f -name visualize-graph.html | xargs sed -i '' 's/PROXY_API_URL/<API-Gateway-Endpoint>/g'

The following is an example of the completed code:

find . -type f -name visualize-graph.html | xargs sed -i '' 's/PROXY_API_URL/https:\/\/7brms4lx43.execute-api.us-east-2.amazonaws.com\/test/g'

After you replace the value of PROXY_API_URL in the visualize-graph.html file, upload the file to Amazon S3 using the following command:

# upload the HTML documents with public read access
aws s3 cp ./ s3://<bucket-name> --recursive --exclude "*" --include "vis*" --acl public-read

You’re all set! You can visualize the graph data through this application from the following URL:

http://<bucket-name>.s3-website.<aws-region-code>.amazonaws.com

Visualize the resources on the dashboard

You can now search for resources by a specific ID in the account or search by label. For more information about the resources that are supported and indexed by AWS Config, see Supported Resource Types.

Landing page

The landing page has an option to search via your resource ID or resource label.

Search by ID

You can enter the ID of a specific resource in the Find Resource by ID field and choose Find to populate the dashboard with that resource.

To find resources related to this instance (such as VPC, subnet, or security groups), choose the resource. A visualization appears that shows all the related resources based on the AWS Config relationships.

You can see more relationships by choosing a specific resource and pulling up its relationships.

Search by label

In certain use cases, you might want to view all the resources without looking at any individual resources. For example, you may want to see all the Amazon Elastic Compute Cloud (Amazon EC2) instances without searching for a specific instance. To do so, you search by label. Neptune stores the label of a resource based on the resource type value in AWS Config. For example, if you look at the EC2 resources, you can choose the specific value Instance (AWS::EC2::Instance), SecurityGroup (AWS::EC2::SecurityGroup), or NetworkInterface (AWS::EC2::NetworkInterface).

In this case, you can search for all the SecurityGroup values, and the dashboard is populated with all the security groups that exist in our account.

You can then drill down into the specific security group and start mapping out the relationships (for example, which EC2 instance it’s attached to, if any).

Conclusion

In this post, you saw how you can use Neptune to visualize all the resources in your AWS account. This can help you better understand all the existing relationships through easy-to-use visualizations.

Amazon Neptune now supports graph visualization in the Neptune workbench. In our next blog post, we will demonstrate how you can use the Neptune workbench to visualize your data. For more details on the Neptune workbench, see this post.

About the author

Rohan Raizada is a Solutions Architect for Amazon Web Services. He works with enterprises of all sizes with their cloud adoption to build scalable and secure solutions using AWS. During his free time, he likes to spend time with family and go cycling outdoors.


Amey Dhavle is a Senior Technical Account Manager at AWS. He helps customers build solutions to solve business problems, evangelize new technologies and adopt AWS services. Outside of work, he enjoys watching cricket and diving deep on advancements in automotive technologies. You can find him on twitter at @amdhavle.
