

System Programming

Calculator using LEX and YACC

June 21, 2011 by ProjectsGeek 3 Comments

Implementation of Calculator using LEX and YACC

Aim: Implementation of Calculator using LEX and YACC.

Objective: To study the process of lexical analysis and parsing.

Theory:

During the first phase the compiler reads the input and converts strings in the source into tokens. With regular expressions we can specify patterns to lex, so that it can generate code that scans and matches strings in the input. Each pattern specified in the input to lex has an associated action. Typically an action returns a token that represents the matched string, for subsequent use by the parser. Initially we will simply print the matched string rather than return a token value.

The following represents a simple pattern, composed of a regular expression, that scans for identifiers. Lex will read this pattern and produce C code for a lexical analyzer that scans for identifiers.

letter(letter|digit)*

This pattern matches a string of characters that begins with a single letter followed by zero or more letters or digits. This example nicely illustrates operations allowed in regular expressions:

  • repetition, expressed by the “*” operator
  • alternation, expressed by the “|” operator
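
As a rough illustration (not part of the original text), the same identifier rule can be checked by a small hand-written C routine; lex generates equivalent scanning code automatically from the regular expression above.

/*
 * Hand-coded check for the pattern letter(letter|digit)*.
 * Purely illustrative; lex produces this kind of code for you.
 */
#include <ctype.h>
#include <stdio.h>

static int is_identifier(const char *s)
{
    if (!isalpha((unsigned char)s[0]))          /* must start with a letter       */
        return 0;
    for (s++; *s; s++)                          /* zero or more letters or digits */
        if (!isalnum((unsigned char)*s))
            return 0;
    return 1;
}

int main(void)
{
    printf("%d %d\n", is_identifier("count1"), is_identifier("1count"));  /* prints 1 0 */
    return 0;
}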

History of Lex & Yacc

  • Lex & Yacc were developed at Bell Laboratories in the 70’s
  • Yacc was developed first of the two, by Stephen C. Johnson
  • Lex was designed by Mike E. Lesk and Eric Schmidt to work with Yacc
  • Both are standard UNIX utilities

Lex & Yacc

  • Programming tools for writers of compilers and interpreters
  • Also interesting for non-compiler writers: any application looking for patterns in its input, or having an input/command language, is a candidate for Lex/Yacc
  • lex and yacc help you write programs that transform structured input
      – lex generates a lexical analyzer
          • divides a stream of input characters into meaningful units (lexemes), identifies them (tokens) and may pass the tokens on to a parser generated by yacc
          • lex specifications are regular expressions
      – yacc generates a parser
          • may do syntax checking only or create an interpreter
          • yacc specifications are grammar rules

Lex:

  • The Unix program “lex” is a “Lexical Analyzer Generator”
      – Takes a high-level description of lexical tokens and actions
      – Generates C subroutines that implement the lexical analysis
      – The name of the resulting subroutine is “yylex”
  • Generally, yylex is linked to other routines, such as the parsing procedures generated by YACC

  • Organization of a Lex program

definitions
%%
translation rules
%%
user subroutines (C code)

  • Translation rules consist of a sequence of patterns associated with actions
  • Lex reads the file and generates a scanner
      – Repeatedly locates the “longest prefix of the input that is matched by one or more of the patterns”
      – When a match is found, lex executes the associated action
      – In the case of a tie:
          • the regexp that matches the most characters wins
          • if both match the same number of characters, the rule listed first wins (for example, “if” matches both a keyword pattern and the identifier pattern, so the keyword rule must be listed first)

Regular Expressions in Lex

  • References to a single character
      – x       the character “x”
      – “x”     an “x”, even if x is an operator
      – \x      an “x”, even if x is an operator
      – (x)     an x
      – [xy]    the character x or y
      – [x-z]   the character x, y or z
      – [^x]    any character except x
      – .       any character except newline
  • Repetitions and options
      – x?      an optional x
      – x*      0, 1, 2, … instances of x
      – x+      1, 2, 3, … instances of x

Yacc Introduction

  • Yacc is a theoretically complicated, but “easy” to use, program that parses input files to verify that they correspond to a certain language
  • Your main program calls yyparse() to parse the input file
  • The compiled YACC program automatically calls yylex(), which is in lex.yy.c
  • You really need a Makefile to keep it all straight
  • Yacc takes a grammar that you specify (in BNF form) and produces a parser that recognizes valid sentences in your language
  • It can also generate interpreters, if you include an action for each statement that is executed when the statement is recognized (completed)

The Yacc Parser

  • The parser reads tokens; if a token does not complete a rule, it is pushed on a stack and the parser switches to a new state reflecting the token it just read
  • When it finds all the tokens that constitute the right-hand side of a rule, it pops the right-hand-side symbols off the stack and pushes the left-hand-side symbol onto the stack (this is called a reduction)
  • Whenever yacc reduces a rule, it executes the user code associated with the rule
  • The parser is therefore referred to as a shift/reduce parser
  • yacc cannot run alone; it needs lex

Organization of a Yacc file

  • Definition section
      – Declarations of the tokens used in the grammar, the types of values used on the parser stack, and other odds and ends
      – For example, %token PLUS, MINUS, TIMES, DIVIDE
      – Declarations of non-terminals, %union, etc.
  • Rules section
      – A list of grammar rules in BNF form
      – For an example, see the calculator grammar at the end of this page
      – Each rule may or may not have an associated action (actions are executed when the rule is reduced)

Communication between Lex and Yacc

  • Whenever Lex returns a token to the parser that has an associated value, the lexer must store the value in the global variable yylval before it returns.
  • The variable yylval is of the type YYSTYPE; this type is defined in the file y.tab.h (created by yacc using the option ‘-d’).
  • By default it is an integer.
  • If you want to have tokens of multiple value types, you have to list all the types using the %union declaration.
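
For orientation, the sketch below shows what the generated header conceptually provides for the calculator specification given at the end of this page. The exact contents of y.tab.h differ between yacc and bison versions, so the token value and layout shown here are assumptions.

/* Hedged sketch of y.tab.h for the calculator grammar (not actual yacc output). */
#define NUMBER 257              /* token code chosen by yacc (value is illustrative) */

typedef union {                 /* generated from the %union declaration    */
    double dval;
} YYSTYPE;

extern YYSTYPE yylval;          /* the lexer stores the token's value here  */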






Other Projects to Try:

  1. Lex and Yacc Calculator code using Unix Script
  2. Lexical analyzer Code in C Language
  3. Lexical Analyzer for Subset of C
  4. Regular Expression to DFA
  5. Software Design Laboratory Assignments

Filed Under: System Programming

Lexical Analyzer for Subset of C

June 21, 2011 by ProjectsGeek 2 Comments

Implement Lexical Analyzer for the subset of C

Aim : Write a program in C to implement Lexical Analyzer for the subset of C.

Objective:

  1. To understand the basic principles in compilation.

  2. To study lexical analysis phase of compiler.

Theory:

A compiler takes a source program as input & produces an equivalent sequence of machine instructions as output. This process consists of a two-step processing of the source program.

  1. Analysis of source program.

  2. Synthesis of target program.

Analysis step:

It consists of three sub steps

  1. Lexical Analysis – Determine lexical constituents in source program.

  2. Syntax Analysis – Determine structure of source string.

  3. Semantic Analysis – Determine meaning of source string.

Synthesis Step:

It deals with memory allocation & code generation. The actions in the analysis phase are uniquely defined for a given language, but the synthesis step consists of many action instances whose actions depend on aspects of the execution environment of the compiler, e.g. operating system interfaces and target machine features such as the instruction set, addressing modes, etc.

Lexical analysis: The action of scanning the source program into proper syntactic classes is known as lexical analysis.

Task of Lexical Analysis:

  1. To scan the program into basic elements or tokens of the language.

  2. To build the Uniform symbol table (table of tokens).

  3. To build the symbol & literal table.

  4. To remove white spaces & comments.

  5. To detect errors such as invalid identifier or constant.

Data structures / Databases required:

  1. Source program – Original Source program, which is scanned by compiler as string of characters.

  2. Terminal Table – A permanent database that has an entry for each terminal symbol, such as arithmetic operators, keywords and punctuation characters (‘;’, ‘,’ etc.). Fields: Name of the symbol.

  3. Literal Table – This table is created during lexical analysis so as to describe all literals in the program.

Fields: Name of the literal.

  4. Identifier Table – Created during lexical analysis; describes all identifiers in the program.

Fields: Name of the identifier.

  5. Uniform Symbol Table – Created during lexical analysis to represent the program as a string of tokens, rather than individual characters. Each uniform symbol contains the identification of the table to which it belongs (IDN – Identifier Table, LIT – Literal Table, TRM – Terminal Symbol Table) and the index within that table.

  6. Buffer – One-buffer or two-buffer schemes to load the source program part by part, to reduce disk I/O.

Keywords can be stored either in the terminal table or in the keyword table.

Basic steps in algorithm

1. Initialize line no to 1.

2. Read the source program line by line (the buffer is one line).

3. For each line separate the tokens such as

i) identifier/ function name / keywords : Follow the transition diagram to detect this; i.e. letter followed by letter or digit. Search in keyword table for existence of keyword, otherwise it is identifier or function name.

ii) Integer Constant : Digit followed by digit

iii) All types of operators such as >, >=, ++, +, – etc

iv) Remove comments of the type /* */

v) Remove all white spaces.

4. Assign the line number and increment the line number.

5. Repeat steps 2-4 till end of file.
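
A minimal C sketch of steps 2 and 3 above is given below. The keyword list and the printed token classes are illustrative assumptions; a full solution would instead make entries in the uniform symbol, identifier and literal tables and strip /* */ comments before scanning.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *keywords[] = { "int", "if", "else", "while", "return" };  /* illustrative subset */

static int is_keyword(const char *w)
{
    for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++)
        if (strcmp(w, keywords[i]) == 0)
            return 1;
    return 0;
}

static void scan_line(const char *p, int lineno)
{
    char word[64];

    while (*p) {
        if (isspace((unsigned char)*p)) { p++; continue; }      /* remove white spaces     */

        if (isalpha((unsigned char)*p) || *p == '_') {          /* letter (letter|digit)*  */
            int n = 0;
            while ((isalnum((unsigned char)*p) || *p == '_') && n < 63)
                word[n++] = *p++;
            word[n] = '\0';
            printf("%d: %-10s %s\n", lineno,
                   is_keyword(word) ? "KEYWORD" : "IDENTIFIER", word);
        } else if (isdigit((unsigned char)*p)) {                /* digit followed by digit */
            int n = 0;
            while (isdigit((unsigned char)*p) && n < 63)
                word[n++] = *p++;
            word[n] = '\0';
            printf("%d: %-10s %s\n", lineno, "CONSTANT", word);
        } else {                                                /* operator / punctuation  */
            printf("%d: %-10s %c\n", lineno, "OPERATOR", *p);
            p++;
        }
    }
}

int main(void)
{
    scan_line("while (count1 >= 10) count1 = count1 + 1;", 1);
    return 0;
}

Multi-character operators such as >= and ++ would need one extra character of lookahead; the sketch above reports each punctuation character separately.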

Other Projects to Try:

  1. Regular Expression to DFA
  2. Two Pass Assembler Source Code
  3. Calculator using LEX and YACC
  4. Lexical analyzer Code in C Language
  5. B tech final year project-Source Code Analyzer

Filed Under: System Programming

Regular Expression to DFA

June 21, 2011 by ProjectsGeek 4 Comments

Regular Expression to DFA

Aim : Regular Expression to DFA ( To be taken from compiler point of view)

Objective: – To understand the role of regular expressions and finite automata in applications such as Compilers.

Theory: –

 

Regular expressions are used to specify regular languages and finite automata are used to recognize the regular languages. Many computer applications such as compilers, operating system utilities, text editors make use of regular languages. In these applications, the regular expressions and finite automata are used to recognize this language.

 

Compiler is a program which converts the given source program in high-level language into an equivalent machine language. While doing so, it detects errors and reports errors. This process is quite complex and it is divided into number of phases such as Lexical Analysis, Syntax and Semantic Analysis, Intermediate Code generation, Code Generation and Code Optimization.

The lexical analysis phase of compiler reads the source program, character by character and then groups these characters into tokens which are further passed to next phase, which is nothing but parsing or syntax or semantic analysis. After syntax and semantic analysis, Intermediate Code is generated which is followed by actual code generation.

Lexical Analyzer recognizes the tokens from series of characters. A “C” program consists of tokens such as Identifiers, Integers, Floating Point Numbers, Punctuation symbols, relational and logical and arithmetic operators, keywords and comments (to be removed). To identify these tokens, lexical analyzer needs the specification of each of these symbols. The set of words belonging to a particular token type is a regular language. Hence each of these token types can be specified using regular expressions. For example, consider the token Identifier. In most of the programming languages, an identifier is a word which begins with an alphabet (capital or small) followed by zero or more letters or digits (0..9). This can be defined by the regular expression

(letter) . (letter | digit)*

where

letter = A | B | C | … | Z | a | b | c | … | z
digit = 0 | 1 | 2 | … | 9

One can specify all token types using regular expressions. These regular expressions are then converted to DFA’s in the form of DFA transition table. Lexical analyzer reads a character from a source program and based on the current state and current symbol read, makes a transition to some other state. When it reaches a final state of DFA, it groups the series of characters so far read and outputs the token found.
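
As a rough sketch of the table-driven matching just described, the fragment below drives a tiny two-state DFA for identifiers from a transition table. The table, the character classes and the single accepting state are assumptions made for illustration; they are not the DFA the assignment asks you to generate.

#include <ctype.h>
#include <stdio.h>

static int char_class(int c)            /* 0 = letter, 1 = digit, 2 = other */
{
    if (isalpha(c)) return 0;
    if (isdigit(c)) return 1;
    return 2;
}

static const int dtran[2][3] = {
    /* letter digit other */
    {  1,    -1,   -1 },                /* state 0: only a letter may start an identifier */
    {  1,     1,   -1 },                /* state 1: letters or digits keep it going       */
};

/* Returns the length of the identifier found at the start of s, or 0. */
static int match_identifier(const char *s)
{
    int state = 0, i = 0, last_accept = 0;

    while (s[i] != '\0') {
        state = dtran[state][char_class((unsigned char)s[i])];
        if (state == 1)                 /* state 1 is the accepting state */
            last_accept = ++i;
        else
            break;
    }
    return last_accept;
}

int main(void)
{
    printf("%d\n", match_identifier("sum2 = 0;"));   /* prints 4 */
    return 0;
}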

 

Formal definition of Regular expression

 

The class of regular expressions over ∑ is defined recursively as follows:

1. The letters φ and ε are regular expressions over ∑.

2. Every letter ‘a’ ∈ ∑ is a regular expression over ∑.

3. If ‘R1’ and ‘R2’ are regular expressions over ∑, then so are ‘(R1|R2)’, ‘(R1.R2)’ and ‘(R1)*’,

where

‘|’ indicates alternative or parallel paths,
‘.’ indicates concatenation, and
‘*’ indicates closure.

4. The regular expressions are only those that are obtained using rules (1) to (3).

 

Formal definition of DFA:

 

The formal definition of a finite automaton is a 5-tuple

( Q, ∑, d, q0, F )

where

Q  – finite set of states
∑  – finite input alphabet
q0 – initial state of the FA, q0 ∈ Q
F  – set of final states, F ⊆ Q
d  – transition function (state mapping function), d : Q × ∑ → Q

An FA is called deterministic (DFA) if, from every state of its transition graph, each input symbol takes the state to exactly one next state.

The DFA is constructed directly from an augmented regular expression (r)#. We begin by constructing a syntax tree T for (r)# and then compute four functions: Nullable, Firstpos, Lastpos and Followpos. The functions Nullable, Firstpos and Lastpos are defined on the nodes of the syntax tree and are used to compute Followpos, which is defined on the set of positions. We can short-circuit the construction of an NFA by building the DFA whose states correspond to sets of positions in the tree. Positions, in particular, encode the information about when one position can follow another. Each symbol in an input string to a DFA can be matched by certain positions. An input symbol ‘c’ can only be matched by positions at which there is a ‘c’, but not every position with a ‘c’ can necessarily match a particular occurrence of ‘c’ in the input stream.

 

 

 

Algorithm

 

The steps in algorithm are

  1. Accept the given regular expression with end of character as #
  2. Convert the regular expression to its equivalent postfix form manually. (Students need not write the code for converting infix to postfix; they can directly accept the postfix form of the infix expression.)
  3. Construct a syntax tree from the postfix expression obtained in step 2.
  4. Assign positions to leaf nodes
  5. Compute following functions.

Nullable, Firstpos, Lastpos, Followpos

 

 

Computation of Nullable: All nodes except the * nodes are not nullable.

Also, if some leaf node is for ε, then it is also nullable.

Firstpos (Firstposition): At each node n of the syntax tree of a regular expression, we define a function firstpos(n) that gives the set of first positions that can match first symbol of a string generated by sub expression rooted at ‘n’.

Lastpos (lastposition) : At each node n of the syntax tree of a regular expression, we define a function lastpos(n) that gives the set of last positions that can match last symbol of a string generated by sub expression rooted at ‘n’.

To compute firstposition and last position, we need to know which nodes are the roots of sub expression that generate languages that include the empty string. Such nodes are Nullable. We define nullable(n) to be true if node ‘n’ is nullable , false otherwise.

Computation of Followpos : Followpos(i) tells us what positions can follow position i

in the syntax tree. This can be computed as follows.

1. if n is a ‘.’ (cat) Node, with a left child C1 and right child C2 and i is a position in the Lastpos(C1), then all positions in Firstpos(C2) are in Followpos(i)

2. if n is a * (closure) node and i is a position in Lastpos(n), then all positions in Firstpos(n) are in Followpos(i)

6. Construct DFA from Follow Pos.

 

Note: Step 5 can be done during construction of the tree. Since the tree is built from bottom to top, the information for a subtree is already available when the computations at its root are to be done, so no separate traversal is needed.

 

Data Structures:

 

 

 

Node Structure for Parse Tree

{

Leftchild and Rightchild : pointers to the node structure

Nullable : Boolean type

Data : character type

Firstpos and Lastpos : sets of integers

Pos : integer (this may or may not be part of the tree node)

}

Stack : A stack is required to build the tree. It can be implemented either using a linked list (preferable) or as an array. The item or data that is pushed onto or popped from the stack is a pointer to the node structure of the parse tree, not just a single character.
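
One possible C rendering of the node and stack described above is sketched below. The field names follow the text; representing the position sets as machine-word bitsets and the fixed stack bound are assumptions.

#include <stdbool.h>
#include <stdlib.h>

#define STACKMAX 64                     /* assumed bound on pending sub-trees */

typedef struct node {
    struct node  *leftchild, *rightchild;
    bool          nullable;
    char          data;                 /* operand symbol or operator . | *        */
    unsigned long firstpos, lastpos;    /* bit i set => position i is in the set   */
    int           pos;                  /* leaf position, -1 for interior nodes    */
} node;

typedef struct {
    node *items[STACKMAX];
    int   top;                          /* index of the top element, -1 when empty */
} node_stack;

static void  push(node_stack *s, node *n) { s->items[++s->top] = n; }
static node *pop (node_stack *s)          { return s->items[s->top--]; }

int main(void)
{
    node_stack s = { .top = -1 };
    node *leaf = calloc(1, sizeof *leaf);       /* leaf for symbol 'a' at position 1 */
    leaf->data = 'a';
    leaf->pos  = 1;
    leaf->firstpos = leaf->lastpos = 1UL << 1;
    push(&s, leaf);
    /* an operator read from the postfix string would now pop its operand
       sub-trees, build an interior node and push it back                  */
    free(pop(&s));
    return 0;
}

Keeping the sets as bitsets makes the unions needed for Firstpos, Lastpos and Followpos a single bitwise OR; a linked list of integers works just as well if more than 64 positions are needed.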

 

 

 

 

 

 

 

 

Computation of Nullable, Firstpos and Lastpos

  • n is a leaf labeled ε:
      Nullable(n) = true, Firstpos(n) = ø, Lastpos(n) = ø
  • n is a leaf labeled with position i:
      Nullable(n) = false, Firstpos(n) = { i }, Lastpos(n) = { i }
  • n is an or node (c1 | c2):
      Nullable(n) = Nullable(c1) or Nullable(c2)
      Firstpos(n) = Firstpos(c1) U Firstpos(c2)
      Lastpos(n) = Lastpos(c1) U Lastpos(c2)
  • n is a cat node (c1 . c2):
      Nullable(n) = Nullable(c1) and Nullable(c2)
      Firstpos(n) = if Nullable(c1) then Firstpos(c1) U Firstpos(c2) else Firstpos(c1)
      Lastpos(n) = if Nullable(c2) then Lastpos(c1) U Lastpos(c2) else Lastpos(c2)
  • n is a star node (c1*):
      Nullable(n) = true, Firstpos(n) = Firstpos(c1), Lastpos(n) = Lastpos(c1)
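
The rules above translate almost directly into code. The hedged C sketch below computes the three attributes of a node from the attributes of its children; the combine helper, the bitset representation and the position numbering are assumptions made for illustration.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long posset;            /* bit i represents position i */
typedef enum { LEAF_EPS, LEAF_POS, OR_NODE, CAT_NODE, STAR_NODE } kind;
typedef struct { bool nullable; posset firstpos, lastpos; } attrs;

/* Combine the attributes of children c1, c2 according to the rules above.
   For a position leaf, pass the position in 'pos'; unused arguments are ignored. */
static attrs combine(kind k, attrs c1, attrs c2, int pos)
{
    attrs a = { false, 0, 0 };
    switch (k) {
    case LEAF_EPS:  a.nullable = true;  break;
    case LEAF_POS:  a.nullable = false; a.firstpos = a.lastpos = 1UL << pos; break;
    case OR_NODE:   a.nullable = c1.nullable || c2.nullable;
                    a.firstpos = c1.firstpos | c2.firstpos;
                    a.lastpos  = c1.lastpos  | c2.lastpos;  break;
    case CAT_NODE:  a.nullable = c1.nullable && c2.nullable;
                    a.firstpos = c1.nullable ? (c1.firstpos | c2.firstpos) : c1.firstpos;
                    a.lastpos  = c2.nullable ? (c1.lastpos  | c2.lastpos)  : c2.lastpos;  break;
    case STAR_NODE: a.nullable = true;
                    a.firstpos = c1.firstpos;
                    a.lastpos  = c1.lastpos;  break;
    }
    return a;
}

int main(void)
{
    /* a* . b   with a at position 1 and b at position 2 */
    attrs none = { false, 0, 0 };
    attrs a    = combine(LEAF_POS,  none, none, 1);
    attrs star = combine(STAR_NODE, a,    none, 0);
    attrs b    = combine(LEAF_POS,  none, none, 2);
    attrs root = combine(CAT_NODE,  star, b,    0);
    printf("nullable=%d firstpos=%#lx lastpos=%#lx\n",
           root.nullable, root.firstpos, root.lastpos);  /* firstpos = {1,2}, lastpos = {2} */
    return 0;
}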

Algorithm for construction of the DFA transition table

  1. Initially, the only unmarked state in Dstates is Firstpos(root), where root is the root of the syntax tree.
  2. While there is an unmarked state T in Dstates do
     begin
         mark T
         for each input symbol a do
         begin
             let U be the set of positions that are in Followpos(P) for some P in T,
             such that the symbol at position P is a
             if U is not empty and is not in Dstates then
                 add U as an unmarked state to Dstates
             Dtran[T, a] = U
         end
     end
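
A compact C sketch of this loop is shown below, assuming Followpos and the symbol at each position have already been computed for the augmented expression (a|b)*abb# (a standard textbook example, not part of the assignment statement). State sets are encoded as bitmasks.

#include <stdio.h>

#define NPOS 6

/* symbol at each position of (a|b)*abb#; position 6 is the end marker # */
static const char sym[NPOS + 1] = { 0, 'a', 'b', 'a', 'b', 'b', '#' };

/* Followpos for the same expression, as bitmasks over positions 1..6 */
static const unsigned followpos[NPOS + 1] = {
    0,
    (1u<<1)|(1u<<2)|(1u<<3),   /* Followpos(1) = {1,2,3} */
    (1u<<1)|(1u<<2)|(1u<<3),   /* Followpos(2) = {1,2,3} */
    (1u<<4),                   /* Followpos(3) = {4}     */
    (1u<<5),                   /* Followpos(4) = {5}     */
    (1u<<6),                   /* Followpos(5) = {6}     */
    0                          /* Followpos(6) = {}      */
};

int main(void)
{
    const char alphabet[2] = { 'a', 'b' };
    unsigned dstates[32];
    int nstates = 1;

    dstates[0] = (1u<<1)|(1u<<2)|(1u<<3);          /* Firstpos(root) */

    /* The t loop plays the role of "while there is an unmarked state":
       every state is processed exactly once, in the order it was added. */
    for (int t = 0; t < nstates; t++) {
        for (int a = 0; a < 2; a++) {
            unsigned U = 0;
            for (int p = 1; p <= NPOS; p++)        /* union of Followpos(p)...    */
                if ((dstates[t] & (1u << p)) && sym[p] == alphabet[a])
                    U |= followpos[p];             /* ...for p in T labeled a     */
            if (U == 0)
                continue;
            int u = 0;
            while (u < nstates && dstates[u] != U) /* already in Dstates?         */
                u++;
            if (u == nstates)
                dstates[nstates++] = U;            /* add as a new unmarked state */
            printf("Dtran[%d, %c] = %d\n", t, alphabet[a], u);
        }
    }
    return 0;
}

Each bitmask in dstates is one DFA state; any state that contains the position of the end marker # is an accepting state.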

 

 

 

 

Other Projects to Try:

  1. Regular Expression to DFA Code in C Language
  2. Calculator using LEX and YACC
  3. Software Design Laboratory Assignments
  4. BFS AND DFS Algorithm using C Language
  5. Expression Tree using C Language

Filed Under: System Programming

Macro Processor Algorithm

June 21, 2011 by ProjectsGeek Leave a Comment

Macro Processor

Aim: Implementation of Macro Processor. Following cases to be considered 

  • Macro without any parameters
  • Macro with Positional Parameters
  • Macro with Key word parameters
  • Macro with positional and keyword parameters.
Conditional expansion and nested macro implementation are not expected.

Objective:

  • To understand macro facility, features and its use in assembly language programming.
  • To study how the macro definition is processed and how macro call results in the expansion of code.

 

Theory :
An assembly language macro facility extends the set of operations provided in an assembly language.
It lets programmers avoid repeating identical parts of their program: the programmer defines an abbreviation for a part of the program & uses this abbreviation in the program. This abbreviation is treated as the macro definition & saved by the macro processor. For every occurrence of the abbreviation, i.e. a macro call, the macro processor substitutes the definition.
Macro definition part:
It consists of
  • Macro Prototype Statement – this declares the name of macro & types of parameters.
  • Model statement – A statement from which an assembly language statement is generated during macro expansion.
  • Preprocessor Statement – It is used to perform auxiliary function during macro expansion.

 

Macro Call & Expansion
The operation defined by a macro can be used by writing a macro name in the mnemonic field and its operand in the operand field. Appearance of the macro name in the mnemonic field leads to a macro call. Macro call replaces such statements by sequence of statement comprising the macro. This is known as macro expansion.
Macro Facilities
  • Use of AIF & AGO allows us to alter the flow of control during expansion.
  • Loops can be implemented using expansion time variables.

 

Design Procedure 

  • Definition processing – Scan all macro definitions and, for each macro definition, enter the macro name in the macro name table (MNT). Store the entire macro definition in the macro definition table (MDT) and add auxiliary information to the MNT, such as the number of positional parameters (#PP), the number of keyword parameters (#KP), the macro definition table position (MDTP), etc.
  • Macro expansion – Examine all statement in assembly source program to detect the macro calls. For each macro call locate the macro in MNT, retrieve MDTP, establish the correspondence between formal & actual parameters and expand the macro.

 

Data structures required for macro definition processing –

  • Macro Name Table [MNT] – Fields: Name of macro, #pp (number of positional parameters), #kp (number of keyword parameters), MDTP (Macro Definition Table Pointer), KPDTP (Keyword Parameter Default Table Position)
  • Parameter Name Table [PNTAB] – Fields: Parameter name
  • Keyword Parameter Default Table [KPDTAB] – Fields: Parameter name, Default value
  • Macro Definition Table [MDT] – Model statements are stored in intermediate code form as: Opcode, Operands
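
One possible C layout for the tables listed above is sketched below; the field names follow the text, while the array bounds and string lengths are assumptions for a small lab implementation.

#include <string.h>

#define NAMELEN 16

struct mnt_entry {                 /* Macro Name Table                           */
    char name[NAMELEN];
    int  pp;                       /* number of positional parameters            */
    int  kp;                       /* number of keyword parameters               */
    int  mdtp;                     /* index of first model statement in MDT      */
    int  kpdtp;                    /* index of first default entry in KPDTAB     */
};

struct kpdtab_entry {              /* Keyword Parameter Default Table            */
    char name[NAMELEN];
    char deflt[NAMELEN];
};

struct mdt_entry {                 /* Macro Definition Table (intermediate code) */
    char opcode[NAMELEN];
    char operands[64];             /* operands with (p, #n) parameter specs      */
};

struct mnt_entry    MNT[20];
char                PNTAB[20][NAMELEN];   /* Parameter Name Table */
struct kpdtab_entry KPDTAB[20];
struct mdt_entry    MDT[100];

int main(void)
{
    strcpy(MNT[0].name, "INCR");          /* hypothetical macro name, for illustration */
    MNT[0].pp = 2; MNT[0].kp = 1; MNT[0].mdtp = 0; MNT[0].kpdtp = 0;
    return 0;
}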
Algorithm for definition processing: 

Before processing any definition initialize KPDTAB_ptr, MDT_ptr to 0 and MNT_ptr to -1. These table pointers are common to all macro definitions. For each macro definition perform the following steps. 

  1. Initialize PNTAB_ptr to 0, set the MNT fields #pp and #kp to 0, and increment MNT_ptr by 1.
  2. For the macro prototype statement, form the MNT entry:
    a. Enter the macro name into the name field.
    b. For each positional parameter:
      • Enter the name in the parameter name table.
      • Increment PNTAB_ptr by 1.
      • Increment #pp by 1.
    c. KPDTP ← KPDTAB_ptr
    d. For each keyword parameter:
      i. Enter the name & default value in KPDTAB.
      ii. Increment KPDTAB_ptr by 1.
      iii. Enter the name in PNTAB & increment PNTAB_ptr by 1.
      iv. Increment #kp by 1.
    e. MDTP ← MDT_ptr (current MDT pointer)
  3. Read the next statement.
    a. If it is a model statement:
      i. For each parameter, generate the specification (p, #n).
      ii. Record the intermediate code in MDT.
      iii. Increment MDT_ptr by 1.
      Repeat step 3.
    b. If it is a MEND statement:
      Enter MEND in MDT and increment MDT_ptr by 1.
      If #kp == 0 then KPDTP = 0.
      Return to the main logic, i.e. step 6 of the main logic.

Data structures required for expansion processing :-

1.Actual parameter table: APTAB 

2.Macro expansion counter : MEC. 

Algorithm for macro expansion:

1. Initialization
    i. MEC ← MDTP from MNT.
    ii. Create APTAB with #pp + #kp entries and set APTAB_ptr accordingly.
    iii. Copy the keyword parameter defaults from KPDTAB into APTAB[#pp] to APTAB[#pp + #kp - 1].
    iv. Process the actual positional parameters in the call and copy them into APTAB[0] to APTAB[#pp - 1].
    v. For each keyword parameter specification, search for the name in the parameter name field of KPDTAB, get the matching entry q, and enter the value in APTAB[#pp + q - KPDTP].

2. While the statement pointed at by MEC in MDT is not MEND:
    i. If it is a model statement, replace operands of the form (p, #n) by the values in APTAB.
    ii. Increment MEC by one.
    iii. Write the model statement to the expanded code file.

3. Exit from macro expansion.
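
A hedged C sketch of step 2.i is given below: rewriting one model statement by substituting the actual parameters from APTAB. The (P,n) spelling of the parameter specification is an assumption; use whatever intermediate form your definition-processing pass records.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy 'model' to 'out', replacing each (P,n) with the n-th entry of aptab. */
static void expand_line(const char *model, char *out,
                        char aptab[][16], int nparams)
{
    while (*model) {
        if (model[0] == '(' && model[1] == 'P' && model[2] == ',') {
            int n = atoi(model + 3);               /* parameter number                          */
            if (n >= 0 && n < nparams) {
                strcpy(out, aptab[n]);
                out += strlen(out);
            }
            model = strchr(model, ')') + 1;        /* skip the spec (assumes it is well formed) */
        } else {
            *out++ = *model++;
        }
    }
    *out = '\0';
}

int main(void)
{
    char aptab[2][16] = { "AREG", "DATA1" };       /* actual parameters, for illustration */
    char line[80];

    expand_line("MOVER (P,0) (P,1)", line, aptab, 2);
    puts(line);                                    /* prints: MOVER AREG DATA1 */
    return 0;
}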

Main Program Logic

 

  • Initialize KPDTAB_ptr and MDT_ptr to 0 and MNT_ptr to -1. These table pointers are common to all macro definitions (there could be more than one macro definition in a program).
  • Read the statements from the source file, one line at a time.
  • Separate the words in that line and count them. Save the separated words in an array, say word, which is an array of strings.
  • If the word count for a line is one, check whether that only word matches the “MACRO” keyword; if the MACRO keyword is found, perform definition processing.
  • If it does not match, check whether the first word of the line matches any of the entries in the macro name table (search the complete macro name table for the presence of a macro call); if so, perform the macro expansion routine.
  • If there is no macro call and no definition, enter the line as it is in the expanded code file.
  • If not end of file, go to step 3.

 

Other Projects to Try:

  1. Macro Processor Pass Two in C Language
  2. Implementation of Single Pass Algorithm for Clustering
  3. Krushkals algorithm Code in C Language
  4. Bankers Algorithm using C Language OS problem
  5. Kruskal’s Algorithm , Prims Algorithm

Filed Under: System Programming

Two Pass Assembler Source Code

June 21, 2011 by ProjectsGeek 4 Comments

Implementation of TWO Pass assembler

 

Aim

Implementation of a TWO-pass assembler with a hypothetical instruction set. The instruction set should include all types of assembly language statements, such as Imperative, Declarative and Assembler Directive. While designing, stress should be given to:

  • How efficiently the mnemonic opcode table could be implemented, so as to enable faster retrieval of op-codes.
  • Implementation of the symbol table for faster retrieval (concepts from DSF should be applied in the design).


Objective

To learn the basic translation process of assembly language to machine Language.

Theory

A language translator bridges an execution gap to machine language of computer system. An assembler is a language translator whose source language is assembly language.

Language processing activity consists of two phases, the analysis phase and the synthesis phase. Analysis of the source program is based on three components: lexical rules, syntax rules and semantic rules. Lexical rules govern the formation of valid lexical units, syntax rules govern the formation of valid statements in the source language, and semantic rules associate meaning with valid statements of the language. The synthesis phase is concerned with the construction of target language statements that have the same meaning as the source language statements. This consists of memory allocation and code generation.

Analysis of source program statements may not be immediately followed by synthesis of equivalent target statements. This is due to forward references issue concerning memory requirements and organization of Language Processor (LP).

Forward reference of a program entity is a reference to the entity which precedes its definition in the program. While processing a statement containing a forward reference, the language processor does not possess all relevant information concerning the referenced entity. This creates difficulties in synthesizing the equivalent target statements. This problem can be solved by postponing the generation of target code until more information concerning the entity is available. This also reduces the memory requirements of the LP and simplifies its organization. This leads to the multi-pass model of language processing.

Language Processor Pass

It is the processing of every statement in a source program or its equivalent representation to perform language-processing function.

Assembly Language statements

There are three types of statements: Imperative, Declarative, and Assembler Directive. An imperative statement indicates an action to be performed during the execution of the assembled program. Each imperative statement usually translates into one machine instruction. A declarative statement, e.g. DS, reserves areas of memory and associates names with them; DC constructs memory words containing constants. Assembler directives instruct the assembler to perform certain actions during the assembly of a program, e.g. the START directive indicates that the first word of the target program generated by the assembler should be placed at the memory word with the address specified in its operand field.

Function Of Analysis And Synthesis Phase

Analysis Phase

  • Isolate the label operation code and operand fields of a statement.
  • Enter the symbol found in label field (if any) and address of next available machine word into symbol table.
  • Validate the mnemonic operation code by looking it up in the mnemonics table.
  • Determine the machine storage requirements of the statement by considering the mnemonic operation code and operand fields of the statement.
  • Calculate the address of the first machine word following the target code generated for this statement (Location Counter Processing).

Synthesis Phase

  • Obtain the machine operation code corresponding to the mnemonic operation code by searching the mnemonic table.
  • Obtain the address of the operand from the symbol table.
  • Synthesize the machine instruction or the machine form of the constant as the case may be.

Design of a Two Pass Assembler

Tasks performed by the passes of two-pass assembler are as follows:

Pass I

  • Separate the symbol, mnemonic opcode and operand fields.
  • Determine the storage required for every assembly language statement and update the location counter.
  • Build the symbol table and the literal table.
  • Construct the intermediate code for every assembly language statement.

Pass II

Synthesize the target code by processing the intermediate code generated during Pass I.

Data structures required for pass I

  • Source file containing assembly program.
  • MOT: A table of mnemonic op-codes and related information.

It has the following fields

Index  Mnemonic  Type  Op-code  Length  Link
0      ADD       IS    01       01      -1
1      BC        IS    07       01      -1
2      COMP      IS    06       01      -1
3      DIV       IS    08       01       5
4      EQU       AD    03       –        7
5      DC        DL    01       –        6
6      DS        DL    02       –       -1
7      END       AD    05       –       -1

 

Mnemonic : Such as ADD, END, DC

TYPE : IS for imperative, DL for declarative and AD for Assembler directive

OP- code : Operation code indicating the operation to be performed.

Length : Length of instruction required for Location Counter Processing

Hash table Implementation of MOT to minimize the search time required for searching the instruction.

Hash Function used is ASCII Value of the First letter of Mnemonic – 65. This helps in retrieving the op- code and other related information in minimum time. For Example the instruction starting with alphabet ‘A’ will be found at index location 0, ‘B’ at index 1, so on and so forth. If more instructions exist with same alphabet then the instruction is stored at empty location and the index of that instruction is stored in the link field. Thus instructions starting with alphabet ‘D’ will be stored at index locations 3,5,and 6. Those starting with E will be stored at 4 and 7 and the process continues.
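
In C, the hashed lookup described above might look like the sketch below; the table contents mirror the sample MOT, and the fixed-size fields are assumptions.

#include <stdio.h>
#include <string.h>

struct mot_entry {
    char mnemonic[8];
    char type[3];                  /* IS, DL or AD                                 */
    int  opcode;
    int  length;                   /* -1 where not applicable                      */
    int  link;                     /* next entry with the same first letter, or -1 */
};

static struct mot_entry MOT[] = {
    { "ADD",  "IS", 1,  1, -1 },
    { "BC",   "IS", 7,  1, -1 },
    { "COMP", "IS", 6,  1, -1 },
    { "DIV",  "IS", 8,  1,  5 },
    { "EQU",  "AD", 3, -1,  7 },
    { "DC",   "DL", 1, -1,  6 },
    { "DS",   "DL", 2, -1, -1 },
    { "END",  "AD", 5, -1, -1 },
};

static int mot_search(const char *mnemonic)
{
    int i = mnemonic[0] - 'A';                     /* hash: ASCII of first letter - 65 */
    while (i >= 0 && i < (int)(sizeof MOT / sizeof MOT[0])) {
        if (strcmp(MOT[i].mnemonic, mnemonic) == 0)
            return i;
        i = MOT[i].link;                           /* follow the collision chain       */
    }
    return -1;                                     /* not a valid mnemonic             */
}

int main(void)
{
    printf("DS at index %d\n",  mot_search("DS"));   /* chain 3 -> 5 -> 6, prints 6 */
    printf("MOV at index %d\n", mot_search("MOV"));  /* prints -1 (mnemonic error)  */
    return 0;
}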

SYMTB: The symbol table.

Fields are Symbol name and Address (LC value). Initialize all values in the address field to -1; when a symbol appears in the label field, replace its address value with the current LC. A symbol that is used but not defined will still have the address value -1, which is used for error detection.

Symbol Address
Loop 204
Next 214

 

Literal Address
= ‘5’
= ‘1’
=‘1’

 

Intermediate form used Variant 1 / Variant 2

Students are supposed to write the variant used by them.

Data Structure used by Pass II

  • OPTAB: A table of mnemonic opcodes and related information.
  • SYMTAB: The symbol table
  • LITTAB: A table of literals used in the program
  • Intermediate code generated by Pass I
  • Output file containing Target code / error listing.

Algorithm

  • Open the source file in input mode.
  • if end of file of source file go to step 8.
  • Read the next line of the source program
  • Separate the line into words. These words could be stored in array of strings.
  • Search for the first word in the mnemonic opcode table; if it is not present, it is a label: add it as a symbol in the symbol table with the current LC, and then search for the second word in the mnemonic opcode table.
  • If instruction is found

case 1 : imperative statement

case 2: Declarative statement

case 3: Assembler Directive

  • Generate Intermediate code and write to Intermediate code file.
  • go to step 2.
  • Close source file and open intermediate code file
  • If end of file ( Intermediate code), go to step 13
  • Read next line from intermediate code file.
  • Write opcode, register code, and address of memory( to be fetched from literal or symbol table depending on the case) onto target file. This is to be done only for Imperative statement.
  • go to step 9.
  • Close all files.
  • Display symbol table, literal table and target file.
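
For the intermediate code file written by Pass I, one possible record layout is sketched below in a variant-I style, where every operand is recorded as a (table, index) pair. The enum names and field choices are assumptions; the exact layout depends on the variant you choose.

#include <stdio.h>

enum stmt_class    { IS, DL, AD };                 /* statement class */
enum operand_table { NONE, SYMTAB_REF, LITTAB_REF, REG, CONST };

struct ic_record {
    int lc;                            /* location counter value                */
    enum stmt_class cls;               /* IS / DL / AD                          */
    int opcode;
    enum operand_table op1_table;      /* which table operand 1 refers to       */
    int op1_index;                     /* index in that table, or a plain value */
    enum operand_table op2_table;
    int op2_index;
};

int main(void)
{
    /* MOVER AREG, ='5'  at LC 201  ->  (IS,04) (REG,1) (LITTAB,0) */
    struct ic_record r = { 201, IS, 4, REG, 1, LITTAB_REF, 0 };
    printf("(%d) (IS, %02d) (%d, %d) (%d, %d)\n",
           r.lc, r.opcode, r.op1_table, r.op1_index, r.op2_table, r.op2_index);
    return 0;
}

Pass II then only needs to read these records back, look up the symbol and literal addresses, and emit the target code for the imperative statements.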

Imperative statement case

If opcode >= 1 && opcode <=8 ( Instruction requires register operand)

a. Set type as IS, get opcode, get register code, and make entry into symbol or literal table as the case may be. In case of symbol, used as operand, LC field is not known so LC could be -1. Perform LC processing LC++. Updating of symbol table should consider error handling.

if opcode is 00 ( stop) :

Set all fields of Intermediate call as 00. LC++

else register operand not required ( Read and Print)

Same as case 1, only register code is not required, so set it to zero. Here again update the symbol table. LC++

On similar lines we can identify the cases for declarative and assembler directive statements based on opcode.

List of hypothetical instructions:

Instruction  Assembly   Remarks
opcode       mnemonic

00           STOP       stop execution
01           ADD        first operand modified; condition code set
02           SUB        first operand modified; condition code set
03           MULT       first operand modified; condition code set
04           MOVER      register ← memory
05           MOVEM      memory ← register
06           COMP       sets condition code
07           BC         branch on condition code
08           DIV        analogous to SUB
09           READ       first operand is not used
10           PRINT      first operand is not used

Errors

Forward reference(Symbol used but not defined): –

This error occurs when some symbol is used but it is not defined into the program.

Duplication of Symbol

This error occurs when some symbol is declared more than once in the program.

Mnemonic error

If there is invalid instruction then this error will occur.

Register error

If there is invalid register then this error will occur.

Operand error

This error will occur when there is an error in the operand field.

Other Projects to Try:

  1. Macro Processor Algorithm
  2. Lexical Analyzer for Subset of C
  3. B tech final year project-Source Code Analyzer
  4. 100+ .Net mini Projects with Source Code
  5. Macro Processor Pass Two in C Language

Filed Under: System Programming

Lex and Yacc Calculator code using Unix Script

May 4, 2011 by ProjectsGeek Leave a Comment

Lex and Yacc Calculator code

Implementation of Calculator using LEX and YACC Calculator code.

Lex and Yacc Calculator code-Lex

%{  
 #include <stdio.h>  
 #include <stdlib.h>   /* atof() */  
 #include "y.tab.h"    /* token codes and yylval, generated by yacc -d */  
 %}  
 %%  
 [0-9]+(\.[0-9]+)? { yylval.dval = atof(yytext); return NUMBER; }  
 [ \t]             ; /* ignore whitespace */  
 \n                { return 0; }  
 .                 { return yytext[0]; }  
 %%  
 int yywrap(void)  
 {  
   return 1;  
 }

Lex and Yacc Calculator code-Yacc

 

 %{  
 #include <stdio.h>  
 #include <math.h>    /* pow() */  
 int yylex(void);  
 void yyerror(char *s);  
 %}  
 %union  
 {  
   double dval;  
 }  
 %token <dval> NUMBER  
 %left '-' '+'  
 %left '*' '/'  
 %right '^'  
 %type <dval> expr  
 %%  
 result:  
   '=' expr  
   | expr          { printf("%f", $1); }  
   ;  
 expr:  
   expr '+' expr   { $$ = $1 + $3; }  
   | expr '-' expr { $$ = $1 - $3; }  
   | expr '*' expr { $$ = $1 * $3; }  
   | expr '^' expr { $$ = pow($1, $3); }  
   | expr '/' expr {  
       if ($3 == 0)  
         yyerror("divide by zero");  
       else  
         $$ = $1 / $3; }  
   | '-' expr      { $$ = -$2; }  
   | '(' expr ')'  { $$ = $2; }  
   | NUMBER        { $$ = $1; }  
   ;  
 %%  
 void yyerror(char *s)  
 {  
   fprintf(stdout, "%s\n", s);  
 }  
 int main()  
 {  
   yyparse();  
   return 0;  
 }

 

 

Other Projects to Try:

  1. Calculator using LEX and YACC
  2. Lexical analyzer Code in C Language
  3. Regular Expression to DFA Code in C Language
  4. Inter process Communication(IPC)-Client Code
  5. 30+ Android Projects with Source Code

Filed Under: System Programming
