Creating custom logs for AWS Lambda

Author - Richa Sharma

Creating custom log files for an AWS Lambda function in Java, instead of relying on CloudWatch

For developers who use AWS Lambda either to run several concurrent processes or to process large files, monitoring logs in CloudWatch is a cumbersome task. Firstly, CloudWatch creates a separate log stream for each running Lambda function, where you can view the logs but cannot export the contents to an external file when you need the complete log. Secondly, you cannot view the complete log for a function at once: only a few lines are visible at a time, and you need to scroll down again and again to load more content. Moreover, if you have a large volume of logs, it becomes very difficult to reach the end of the stream, even if you just want to check the completion time of the function execution. Another issue is that, because the complete content of the logs is not accessible at once, you cannot search the whole log for an error. In short, the main problem with using CloudWatch for monitoring logs is the limited access to them.

For Java applications running on servers, we can log events to log files using Log4j or any other logging framework. In fact, a couple of logging approaches can also be used inside a Lambda function, such as a custom appender for Log4j 2 or LambdaLogger.log(). However, in the end all of them log the events to CloudWatch Logs instead of a separate log file. Hence, as of now, there is no ready-made solution for logging events to a separate log file while executing a Lambda function.

To resolve this issue, we have come up with a solution in which the logs are accumulated in memory during execution of the Lambda function, and just before the function completes, they are written to a file and uploaded to an S3 bucket. In this way, we can create our own custom log file in the simplest way, without using any logging framework.
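The core of this idea can be sketched independently of AWS: a thread-local buffer that each invocation appends timestamped lines to, and drains once at the end. This is a minimal sketch; the class and method names (`LogBuffer`, `log`, `drain`) are illustrative and not part of any SDK.

```java
import java.sql.Timestamp;

// Each thread accumulates its own log lines in a ThreadLocal StringBuilder;
// the full text is fetched once at the end, ready to be written anywhere
// we like (e.g. uploaded to S3).
public class LogBuffer {

    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    // append one timestamped line to the current thread's buffer
    public static void log(String message) {
        BUFFER.get()
              .append(new Timestamp(System.currentTimeMillis()))
              .append(' ')
              .append(message)
              .append('\n');
    }

    // return the accumulated text and clear the buffer for the next invocation
    public static String drain() {
        String content = BUFFER.get().toString();
        BUFFER.remove();
        return content;
    }
}
```

Using a `ThreadLocal` means concurrent invocations running on separate threads never interleave their log lines, and clearing the buffer in `drain()` prevents a reused container from carrying stale logs into the next invocation.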

Take a look at the example below:

package sampleProgram;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.Date;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
import com.amazonaws.services.s3.model.ObjectMetadata;

// your class needs to implement the RequestHandler interface to be a Lambda function
public class LambdaLog implements RequestHandler<S3Event, String> {

    // create a ThreadLocal instance 'logs', so each thread reads and writes its own copy
    private static final ThreadLocal<String> logs = new ThreadLocal<String>() {
        @Override
        protected String initialValue() {
            return "";
        }
    };

    // instantiate the AWS S3 service client
    private final AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();

    // this method is declared in the RequestHandler interface and must be implemented here
    public String handleRequest(S3Event s3event, Context context) {
        // obtain S3 bucket attributes from the triggering event
        S3EventNotificationRecord record = s3event.getRecords().get(0);
        String srcBucket = record.getS3().getBucket().getName();
        String dstBucket = "destinationBucketName";
        // the log file will be created inside the destination bucket
        // with the path LambdaLogs/<today's date>/logFile.txt
        String today = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
        String logFile = "LambdaLogs/" + today + "/logFile.txt";
        String result;
        // instead of writing System.out.println, use the set method of the logs variable
        logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " Processing started.\n");
        logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " destination bucket: " + dstBucket + "\n");
        try {
            // process your input file from srcBucket with the required logic
            logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " Some logs\n");
            result = "Success";
        } catch (AmazonServiceException e) {
            // the call was transmitted successfully, but Amazon S3 couldn't
            // process it, so it returned an error response
            logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " " + e.getMessage() + "\n");
            result = "error";
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from S3
            logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " " + e.getMessage() + "\n");
            result = "error";
        }
        // generate the log file and upload it to S3 before the function returns
        logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " File processing ends.");
        String fileContent = logs.get(); // get the content of logs to write in the file
        try {
            byte[] bytes = fileContent.getBytes("UTF-8");
            InputStream is = new ByteArrayInputStream(bytes);
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(bytes.length);
            s3Client.putObject(dstBucket, logFile, is, meta);
        } catch (UnsupportedEncodingException e) {
            logs.set(logs.get() + new Timestamp(System.currentTimeMillis()) + " " + e.getMessage());
        } finally {
            // empty the logs variable so a reused container doesn't carry over old logs
            logs.remove();
        }
        return result;
    }
}

To execute the above Lambda function, follow the usual steps for deploying and triggering a Lambda function from an S3 event. After successful execution of the function, you can check that the output file has been created in the S3 bucket specified in the code above. Along with this, the log file will be generated at the mentioned location with contents such as:


2018-06-29 10:55:17.729 Processing started.
2018-06-29 10:55:17.729 destination bucket: destinationBucketName
2018-06-29 10:55:18.122 Some logs
2018-06-29 10:55:19.269 File processing ends.

Hence, using the above process, we can create our own log file and place it at any desired location.
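The "desired location" in the example is a dated S3 key of the form LambdaLogs/2018-06-29/logFile.txt. As a small illustration (the `LogKeyBuilder` helper below is hypothetical, not part of the AWS SDK), the key can be built like this:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Builds the dated S3 object key used in the example above,
// e.g. "LambdaLogs/2018-06-29/logFile.txt".
public class LogKeyBuilder {

    public static String keyFor(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // Lambda clocks run in UTC
        return "LambdaLogs/" + fmt.format(date) + "/logFile.txt";
    }
}
```

Grouping log files under a per-day prefix keeps one invocation's logs from piling up into a single ever-growing object and makes it easy to list or expire logs by date.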

Webner Solutions is a Software Development company focused on developing Insurance Agency Management Systems, Learning Management Systems and Salesforce apps. Contact us for your Insurance, eLearning and Salesforce applications.
