
9.7 Import a project


    To import an existing spide project, such as the pre-built spm_demo project:





  1. Select File >> Import.

  2. Expand General in the Import window, then select Existing Projects into Workspace.

  3. Browse to or type in the pathname of the existing project (e.g., /opt/spi/Stream_nnn/demos/spm_demo).

  4. If you wish to work with copies of the imported project files in your workspace, check the Copy projects into workspace checkbox (see the paragraph below concerning readonly files). If you leave the checkbox unchecked, spide works with the project files in the original location instead.

  5. Click Finish.

After you import the spm_demo project from a Stream distribution, you will need to modify the hardware run configurations testbench_hw and app_hw, as they contain hard-wired user name, IP address and directory pathname specifications for the target hardware.


If you import a project with the Copy projects into workspace checkbox selected, spide preserves the permissions from the original project. If the original project contains readonly files, you must modify the permissions of any files you wish to change in the new workspace. In particular, run configurations (.launch files) should be writable for spide to work as expected.
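For example, from a shell you could make all of the copied run configurations writable; the workspace path below is only illustrative:

$ cd ~/workspace/spm_demo
$ find . -name '*.launch' -exec chmod u+w {} +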

9.8 Use Makefile from command line



spide creates a Makefile in the root directory of a project, as well as module-specific files modules/module.mk that the Makefile references. spide then uses make to build the project. In some circumstances, you might find it convenient to build the project from the command line rather than under spide. This section gives a brief introduction to the spide-generated Makefile, using the spm_demo project as an example. It assumes that you are familiar with make and with Makefiles.
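As a minimal sketch (the workspace path is hypothetical; ARCH and MODE are described below), you run make in the project root directory, where spide generated the Makefile:

$ cd ~/workspace/spm_demo
$ make build ARCH=sp16 MODE=release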
The project Makefile defines the following primary targets:


  • build: Build project artifacts
  • clean: Clean project artifacts
  • clobber: Clobber the entire build tree
  • package: Package the project into compressed tarballs
  • printvar-var: Display the value of Makefile variable var (see the example after the variable list below)



Useful Makefile variables include:
  • ARCH: Target architecture
  • MODE: Build mode
  • MODULES: Modules to build: all (default), includes, libs, exes, or a list of module names
  • PROJECT_ARCHS: Project architectures
  • PROJECT_DEPS: Project dependencies
  • PROJECT_EXES: Project executables
  • PROJECT_INCLUDES: Project includes
  • PROJECT_LIBS: Project libraries
  • PROJECT_MODES: Project modes
  • PROJECT_MODULES: Project modules
  • PROJECT_NAME: Project name
  • PROJECT_TYPES: Project types
  • PROJECT_VERSION: Project version
  • TYPE: all (default), {exe,lib}_{artifacts,sources}, or common
  • VARIANT_CFLAGS_COMMON: C compilation flags common to all modes
  • VARIANT_CFLAGS_mode: Mode-specific C compilation flags for mode
  • VERBOSE: If nonempty, print the command lines generated by make
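Any of these variables can be inspected with the printvar-var target; for example (the output format and value shown are only illustrative):
$ make printvar-PROJECT_VERSION
PROJECT_VERSION = 1.0.0.0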
For example, to build the complete application version of spm_demo for the SP-16 in release mode, with the generated command lines echoed, type:
$ make build ARCH=sp16 MODE=release MODULES="app_dsp app_sys" VERBOSE=1
To remove existing build artifacts and build an SP-16 functional testbench version of spm_demo, type:
$ make clean

$ make build ARCH=sp16 MODE=functional MODULES=testbench


To package the sources for spm_demo into a tarball, type:
$ make package TYPE=exe_sources
This builds the tarball build/pkg/spm_demo_1.0.0.0_exe_sources.tgz.
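To inspect the contents of the generated tarball, you can list it with tar:
$ tar tzf build/pkg/spm_demo_1.0.0.0_exe_sources.tgz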


10 Performance optimization




This chapter discusses Stream program performance analysis and optimization. The Stream programming model allows the programmer to create Stream programs that use the powerful hardware features of a stream processor efficiently, and the Stream tools produce performance data that guide the programmer toward improved program performance.
A programmer can obtain performance data for a Stream program from profile data generated during program simulation, as well as from timers coded explicitly in the program source. The Performance section of the Command line tools chapter above describes how to generate tabular program performance information with spperf, and the Stream Program Development chapter gives information on how to generate and view tabular and visual performance information using spide. This chapter introduces some basic optimization concepts and shows how to interpret the performance data generated by spperf and spide.
Production analysis breaks a process down into a set of tasks. Each task has a known set of resource requirements, possible dependencies on other tasks, and a required time to completion. Scheduling attempts to find an optimal task schedule: a schedule that uses the available resources to complete the process as quickly as possible. An optimal schedule has a critical path to completion; increasing the time required for any task on the critical path increases the total time required for the process.
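As a simple illustration with hypothetical numbers: suppose tasks A and B can run in parallel, A requires 300 cycles, B requires 500 cycles, and a task C requiring 200 cycles depends on both. The critical path is B followed by C, for a total of 700 cycles; shaving cycles from A does not shorten the schedule, while any reduction in B or C shortens it directly.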
A stream processor presents optimization opportunities at the component level, at the pipeline level, and at the kernel level. The following sections discuss optimization issues and techniques for each level.
Storm-1 Benchmarks describes the Storm-1 benchmark program in distribution directory benchmarks/benchmark/. The program and document provide detailed examples of Stream program optimization.

