Improve Get-Jobs performance with large numbers of jobs #2913

Closed
michaelrsweet opened this issue Aug 22, 2008 · 20 comments
Labels: enhancement (New feature or request)

Comments

@michaelrsweet
Collaborator

Version: 2.0-feature
CUPS.org User: twaugh.redhat

If there are very many completed jobs preserved in the history, an IPP Get-Jobs operation may tie up the scheduler in get_jobs() for several minutes at 100% CPU usage, preventing jobs from being serviced.

Perhaps it ought to be possible to configure CUPS to deny requests that do not use the 'limit' attribute if the result would exceed a certain number of jobs?
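
For reference, a client can already bound the work it asks of cupsd by sending a "limit" attribute (and a narrow "requested-attributes" list) with the Get-Jobs request. A rough sketch against the public CUPS API follows; the 25-job limit and the attribute list are only illustrative, not anything CUPS ships:

#include <cups/cups.h>
#include <stdio.h>

int					/* O - Exit status */
main(void)
{
  static const char *jattrs[] =		/* Only the attributes we need */
  {
    "job-id",
    "job-name",
    "job-state"
  };
  http_t	*http;			/* Connection to scheduler */
  ipp_t		*request,		/* Get-Jobs request */
		*response;		/* Response from cupsd */
  ipp_attribute_t *attr;		/* Current attribute */

  if ((http = httpConnectEncrypt(cupsServer(), ippPort(),
                                 cupsEncryption())) == NULL)
    return (1);

  request = ippNewRequest(IPP_GET_JOBS);
  ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "job-uri",
               NULL, "ipp://localhost/jobs/");
  ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
               NULL, "completed");
  ippAddStrings(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD,
                "requested-attributes",
                (int)(sizeof(jattrs) / sizeof(jattrs[0])), NULL, jattrs);
  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit", 25);

  if ((response = cupsDoRequest(http, request, "/")) != NULL)
  {
    for (attr = ippFindAttribute(response, "job-id", IPP_TAG_INTEGER);
         attr;
         attr = ippFindNextAttribute(response, "job-id", IPP_TAG_INTEGER))
      printf("job-id %d\n", attr->values[0].integer);

    ippDelete(response);
  }

  httpClose(http);
  return (0);
}

The open question is what cupsd should do when a client does not ask for any such limit.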

@michaelrsweet
Collaborator Author

CUPS.org User: mike

How many jobs?

Even 500 jobs shouldn't take very long to load...

@michaelrsweet
Collaborator Author

CUPS.org User: twaugh.redhat

Tens of thousands. Even 5,000 jobs takes over 10s for 'lpstat -Wall -o', and that's without PreserveJobFiles being set.

If PreserveJobFiles is set, it seems that cupsd wants to auto-type every job file as well...

@michaelrsweet
Collaborator Author

CUPS.org User: mike

OK, I'm pushing this to an RFE for a future release (not 1.4), since the default limit is 500 jobs.

It still shouldn't take that long to load the job history, but we'll just need to do some performance tuning for that use case.

@michaelrsweet
Collaborator Author

CUPS.org User: rojon

Where does the limit come from? Looking at the current code, the only hard limit I can find is in scheduler/ipp.c:6138, limit = 1000000; (cups-1.3.8-r7864), which is used when the requester supplies no limit. Neither the CGI programs nor lpstat impose any limit other than the one in ipp.c (yet). Even worse, when a destination is given, the job-uri is not adapted to restrict the search to that destination, so an array of all jobs matching the "which-jobs" tag has to be built before any output is emitted. This heavily degrades the cupsd service, and it also consumes an incredibly large amount of memory to build an array of all "MaxJobFiles" entries even when you are only looking for 10 matching jobs or for a specific printer ...
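
As a point of comparison, a client can at least scope the request to one destination by pointing the request at the queue's URI instead of the server-wide "ipp://localhost/jobs/" list, so cupsd only has to copy attributes for jobs on that queue. A rough sketch (the queue name and helper function are illustrative only):

#include <cups/cups.h>

/*
 * Build a Get-Jobs request scoped to a single destination rather than
 * the whole job list.  "dest" would be the queue name, e.g. "laserjet".
 */

static ipp_t *				/* O - New Get-Jobs request */
make_scoped_request(const char *dest)	/* I - Destination name */
{
  char	uri[1024];			/* printer-uri value */
  ipp_t	*request;			/* Get-Jobs request */

  request = ippNewRequest(IPP_GET_JOBS);

  httpAssembleURIf(HTTP_URI_CODING_ALL, uri, sizeof(uri), "ipp", NULL,
                   "localhost", ippPort(), "/printers/%s", dest);

  ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "printer-uri",
               NULL, uri);
  ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
               NULL, "completed");
  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit", 25);

  return (request);
}

This is essentially what the lpstat patch below does by building a destination-specific job-uri.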

@michaelrsweet
Collaborator Author

CUPS.org User: mike

The maximum size of the job history is controlled by the MaxJobs directive in cupsd.conf. The default value for this is 500.

@michaelrsweet
Collaborator Author

CUPS.org User: rojon

This effectively limits the maximum number of reported jobs; nonetheless, it does not impose a limit on IPP_GET_JOBS if you want to keep, say, about 50,000 jobs in the job history. I think we could at least change lpstat to behave more efficiently and give cupsd a chance to service jobs even while we retrieve a full list of completed jobs. Attached is a patch to lpstat.c that behaves much more politely ...

@michaelrsweet
Collaborator Author

CUPS.org User: twaugh.redhat

Attached is a patch to do the same for the web interface.

@michaelrsweet
Collaborator Author

CUPS.org User: mike

Considering for CUPS 1.5, although I may rev the cupsGetJobs API to handle this for all of the current clients.
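
For context, the convenience call most clients go through today is cupsGetJobs(), which has no limit or first-job-id parameter, so it always pulls the entire (possibly enormous) list in one request. A minimal sketch of that current usage, assuming CUPS 1.2 or later for the CUPS_WHICHJOBS_* constants:

#include <cups/cups.h>
#include <stdio.h>

int					/* O - Exit status */
main(void)
{
  cups_job_t	*jobs;			/* Job list from cupsd */
  int		i,			/* Looping var */
		num_jobs;		/* Number of jobs returned */

 /*
  * One request for every completed job on every destination; with a
  * large preserved history this is exactly the expensive case above.
  */

  num_jobs = cupsGetJobs(&jobs, NULL, 0, CUPS_WHICHJOBS_COMPLETED);

  for (i = 0; i < num_jobs; i ++)
    printf("%d %s %s\n", jobs[i].id, jobs[i].dest, jobs[i].title);

  if (num_jobs > 0)
    cupsFreeJobs(num_jobs, jobs);

  return (0);
}

A revved API would presumably let callers pass a window (a limit plus a starting point) instead.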

@michaelrsweet
Collaborator Author

CUPS.org User: mike

Pushing to future release; we can't use the patches as-is and I'd like to do some different optimizations when the client is asking for already-cached data.

@michaelrsweet
Collaborator Author

CUPS.org User: twaugh.redhat

Any update on this? This is essentially a denial of service using what should be a non-privileged operation.

@michaelrsweet
Collaborator Author

CUPS.org User: mike

The work is queued up for 1.6 and will likely be addressed in the coming weeks.

@michaelrsweet
Collaborator Author

CUPS.org User: mike

Pushing out a bit; I want to add support for the new first-index attribute, and then we'll apply this. Too late for 1.6...

@michaelrsweet
Collaborator Author

CUPS.org User: twaugh.redhat

I'm not sure how first-index will help this. If the client doesn't supply that attribute, Get-Jobs will have the same performance problems as ever.

Perhaps the timer support (from the Avahi work I did) could be used to break up long operations into fairer portions?

@michaelrsweet
Collaborator Author

CUPS.org User: mike

Tim, the first-index attribute fixes issues with using first-job-id in the "window fetching" changes you have provided (due to priority and state, job-ids may not come across in numerical order...)

I am thinking about adding a default limit value of 500 (configurable of course) so that clients that just ask for job history will not cause the attributes of all jobs to be loaded; this combined with the latest changes to support time-based history preservation (STR #3143) should mitigate this issue until cupsd can better deal with long history reports. Future versions of cupsd will be multi-threaded as well, so a single long-running operation won't impact other clients like it does today.

@michaelrsweet
Collaborator Author

CUPS.org User: mike

Fixed in Subversion repository.

The attached patch adds a small amount of additional caching to allow the most common uses of Get-Jobs to work without requiring the full job history to be reloaded. When loading is necessary, we also now limit the number of returned jobs to 500 (and return this limit in the "limit" attribute in the response).

I'm not incorporating the web interface or command changes since they only ask for already-cached data and won't need any special handling. But if someone does extend the web template to show more job attributes, we will limit the list to the 500 most recently completed jobs for a given printer (or across all queues for the server-wide job listing).
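
For clients that really do want the full history, the intent is that they fetch it in windows instead of one huge request. A rough client-side sketch using the new first-index attribute together with a 500-job window (illustrative only, not the shipped lpstat or web interface code):

#include <cups/cups.h>
#include <stdio.h>

int					/* O - Exit status */
main(void)
{
  http_t	*http;			/* Connection to scheduler */
  ipp_t		*request,		/* Get-Jobs request */
		*response;		/* Response from cupsd */
  ipp_attribute_t *attr;		/* Current attribute */
  int		index = 1,		/* 1-based index of next window */
		count;			/* Jobs seen in this window */

  if ((http = httpConnectEncrypt(cupsServer(), ippPort(),
                                 cupsEncryption())) == NULL)
    return (1);

  do
  {
    request = ippNewRequest(IPP_GET_JOBS);
    ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "job-uri",
                 NULL, "ipp://localhost/jobs/");
    ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
                 NULL, "completed");
    ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit", 500);
    ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER,
                  "first-index", index);

    if ((response = cupsDoRequest(http, request, "/")) == NULL)
      break;

    count = 0;

    for (attr = ippFindAttribute(response, "job-id", IPP_TAG_INTEGER);
         attr;
         attr = ippFindNextAttribute(response, "job-id", IPP_TAG_INTEGER))
    {
      printf("job-id %d\n", attr->values[0].integer);
      count ++;
    }

    ippDelete(response);
    index += count;
  }
  while (count == 500);			/* A short window means we reached the end */

  httpClose(http);
  return (0);
}

Paging by first-index rather than first-job-id sidesteps the ordering problem noted earlier, since it counts positions in the scheduler's job list instead of relying on job IDs arriving in numerical order.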

@michaelrsweet
Collaborator Author

"str-2913-lpstat-1.patch":

--- systemv/lpstat.c	Sat Jul 12 00:48:49 2008
+++ systemv/lpstat.c	Wed Aug 27 22:11:46 2008
@@ -1398,6 +1398,7 @@
 		*title;			/* Pointer to job-name */
   int		rank,			/* Rank in queue */
 		jobid,			/* job-id */
+		first,			/* first-job-id */
 		size;			/* job-k-octets */
   time_t	jobtime;		/* time-at-creation */
   struct tm	*jobdate;		/* Date & time */
@@ -1416,6 +1417,7 @@
 		  "job-originating-user-name",
 		  "job-state-reasons"
 		};
+  int		limit = 25;		/* Limit for IPP_GET_JOBS */
 
 
   DEBUG_printf(("show_jobs(%p, %p, %p)\n", http, dests, users));
@@ -1426,6 +1428,13 @@
   if (dests != NULL && !strcmp(dests, "all"))
     dests = NULL;
 
+  rank  = -1;
+  jobid = -1;
+  first = 1;
+
+  while (jobid != 0)
+  {
+
  /*
   * Build a IPP_GET_JOBS request, which requires the following
   * attributes:
@@ -1448,12 +1457,25 @@
   ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
                NULL, which);
 
+ /*
+  * Search with Limits
+  */
+
+  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit",
+                limit);
+
+  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "first-job-id",
+                first);
+
  /*
   * Do the request and get back a response...
   */
 
   if ((response = cupsDoRequest(http, request, "/")) != NULL)
   {
+    jobid = 0;
+
    /*
     * Loop through the job list and display them...
     */
@@ -1465,7 +1487,6 @@
       return (1);
     }
 
-    rank = -1;
 
     for (attr = response->attrs; attr != NULL; attr = attr->next)
     {
@@ -1681,7 +1702,10 @@
 	if (attr == NULL)
 	  break;
     }
 
+    if (jobid)
+      first = jobid + 1;
+
     ippDelete(response);
   }
   else
@@ -1689,6 +1713,8 @@
     _cupsLangPrintf(stderr, "lpstat: %s\n", cupsLastErrorString());
     return (1);
   }
+  } /* Loops until Jobs in List */
 
   return (0);
 }

@michaelrsweet
Collaborator Author

"str-2913-lpstat-2.patch":

--- systemv/lpstat.c	Sat Jul 12 00:48:49 2008
+++ systemv/lpstat.c	Wed Aug 27 22:32:02 2008
@@ -1398,6 +1398,7 @@
 		*title;			/* Pointer to job-name */
   int		rank,			/* Rank in queue */
 		jobid,			/* job-id */
+		first,			/* first-job-id */
 		size;			/* job-k-octets */
   time_t	jobtime;		/* time-at-creation */
   struct tm	*jobdate;		/* Date & time */
@@ -1405,7 +1406,8 @@
 		*ptr;			/* Pointer into printer name */
   int		match;			/* Non-zero if this job matches */
   char		temp[255],		/* Temporary buffer */
-		date[255];		/* Date buffer */
+		date[255],		/* Date buffer */
+		joburi[HTTP_MAX_URI];	/* Job-URI */
   static const char *jattrs[] =		/* Attributes we need for jobs... */
 		{
 		  "job-id",
@@ -1416,6 +1418,7 @@
 		  "job-originating-user-name",
 		  "job-state-reasons"
 		};
+  int		limit = 25;		/* Limit for IPP_GET_JOBS */
 
 
   DEBUG_printf(("show_jobs(%p, %p, %p)\n", http, dests, users));
@@ -1426,6 +1429,13 @@
   if (dests != NULL && !strcmp(dests, "all"))
     dests = NULL;
 
+  rank  = -1;
+  jobid = -1;
+  first = 1;
+
+  while (jobid != 0)
+  {
+
  /*
   * Build a IPP_GET_JOBS request, which requires the following
   * attributes:
@@ -1442,18 +1452,34 @@
 	       "requested-attributes", sizeof(jattrs) / sizeof(jattrs[0]),
 	       NULL, jattrs);
 
+  snprintf(joburi, sizeof(joburi), "ipp://localhost/%s%s",
+           dests ? "printers/" : "jobs/", dests ? dests : "");
+
   ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "job-uri",
-               NULL, "ipp://localhost/jobs/");
+               NULL, joburi);
 
   ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
                NULL, which);
 
+ /*
+  * Search with Limits
+  */
+
+  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit",
+                limit);
+
+  ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "first-job-id",
+                first);
+
  /*
   * Do the request and get back a response...
   */
 
   if ((response = cupsDoRequest(http, request, "/")) != NULL)
   {
+    jobid = 0;
+
    /*
     * Loop through the job list and display them...
     */
@@ -1465,7 +1491,6 @@
       return (1);
     }
 
-    rank = -1;
 
     for (attr = response->attrs; attr != NULL; attr = attr->next)
     {
@@ -1681,7 +1706,10 @@
 	if (attr == NULL)
 	  break;
     }
 
+    if (jobid)
+      first = jobid + 1;
+
     ippDelete(response);
   }
   else
@@ -1689,6 +1717,8 @@
     _cupsLangPrintf(stderr, "lpstat: %s\n", cupsLastErrorString());
     return (1);
   }
+  } /* Loops until Jobs in List */
 
   return (0);
 }

@michaelrsweet
Collaborator Author

"cups-showjobs.patch":

diff --git a/cgi-bin/ipp-var.c b/cgi-bin/ipp-var.c
index 1de7cae..9522667 100644
--- a/cgi-bin/ipp-var.c
+++ b/cgi-bin/ipp-var.c
@@ -1217,48 +1217,92 @@ cgiShowJobs(http_t *http,		/* I - Connection to server */
   ipp_attribute_t	*job;		/* Job object */
   int			ascending,	/* Order of jobs (0 = descending) */
 			first,		/* First job to show */
-			count;		/* Number of jobs */
+			final,		/* This is the last request */
+			returned,	/* Number of jobs in response */
+			count,		/* Number of matching jobs */
+			total;		/* Total number of matching jobs */
   const char		*var;		/* Form variable */
   void			*search;	/* Search data */
   char			url[1024],	/* URL for prev/next/this */
 			*urlptr,	/* Position in URL */
 			*urlend;	/* End of URL */
+  int			*job_ids;	/* Array of job IDs */
+  size_t		job_ids_alloc,	/* Allocation size for array */
+			n_job_ids;	/* Elements in array */
 
 
+  job_ids_alloc = 512;
+  n_job_ids = 0;
+  job_ids = malloc (sizeof (int) * job_ids_alloc);
+  first = 0;
+  final = 0;
+  total = 0;
+
+  if ((var = cgiGetVariable("ORDER")) != NULL)
+    ascending = !strcasecmp(var, "asc");
+  else
+  {
+    ascending = !which_jobs || !strcasecmp(which_jobs, "not-completed");
+    cgiSetVariable("ORDER", ascending ? "asc" : "dec");
+  }
+
  /*
-  * Build an IPP_GET_JOBS request, which requires the following
-  * attributes:
-  *
-  *    attributes-charset
-  *    attributes-natural-language
-  *    printer-uri
+  * Fetch the jobs in batches, not all at once.  This gives the
+  * scheduler time to process other requests in between ours.
   */
 
-  request = ippNewRequest(IPP_GET_JOBS);
-
-  if (dest)
-  {
-    httpAssembleURIf(HTTP_URI_CODING_ALL, url, sizeof(url), "ipp", NULL,
-                     "localhost", ippPort(), "/printers/%s", dest);
-
-    ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "printer-uri",
-                 NULL, url);
-  }
-  else
-    ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "job-uri", NULL,
-                 "ipp://localhost/jobs");
-
-  if ((which_jobs = cgiGetVariable("which_jobs")) != NULL)
-    ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
-                 NULL, which_jobs);
-
-  cgiGetAttributes(request, "jobs.tmpl");
-
- /*
-  * Do the request and get back a response...
-  */
-
-  if ((response = cupsDoRequest(http, request, "/")) != NULL)
-  {
+  do
+  {
+    int first_job_id;
+
+    request = ippNewRequest(IPP_GET_JOBS);
+
+   /*
+    * Build an IPP_GET_JOBS request, which requires the following
+    * attributes:
+    *
+    *    attributes-charset
+    *    attributes-natural-language
+    *    printer-uri
+    */
+
+    if (dest)
+    {
+      httpAssembleURIf(HTTP_URI_CODING_ALL, url, sizeof(url), "ipp", NULL,
+                       "localhost", ippPort(), "/printers/%s", dest);
+
+      ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "printer-uri",
+                   NULL, url);
+    }
+    else
+      ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "job-uri", NULL,
+                   "ipp://localhost/jobs");
+
+    if ((which_jobs = cgiGetVariable("which_jobs")) != NULL)
+      ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD, "which-jobs",
+                   NULL, which_jobs);
+
+    if (first > 0)
+    {
+      ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER,
+                    "first-job-id", first);
+      first_job_id = first;
+    }
+
+    ippAddInteger(request, IPP_TAG_OPERATION, IPP_TAG_INTEGER,
+                  "limit", CUPS_PAGE_MAX);
+
+    cgiGetAttributes(request, "jobs.tmpl");
+
+    if ((response = cupsDoRequest(http, request, "/")) == NULL)
+      break;
+
+   /*
+    * We need to know if we got as many jobs as we asked for to know
+    * when we've reached the end.
+    */
+
+    jobs = cgiGetIPPObjects(response, NULL);
+    returned = cupsArrayCount(jobs);
+    cupsArrayDelete(jobs);
+
    /*
     * Get a list of matching job objects.
     */
@@ -1274,44 +1318,107 @@ cgiShowJobs(http_t *http,		/* I - Connection to server */
     if (search)
       cgiFreeSearch(search);
 
-   /*
-    * Figure out which jobs to display...
-    */
+    if (final)
+     /*
+      * We now have the jobs we need to display.
+      */
+      break;
+
+    total += count;
+
+    for (i = 0; i < count; i++)
+    {
+      ipp_attribute_t *attr;
+
+      for (attr = (ipp_attribute_t *) cupsArrayIndex (jobs, i);
+           attr && attr->group_tag != IPP_TAG_ZERO;
+           attr = attr->next)
+      {
+        if (!strcmp (attr->name, "job-id") &&
+            attr->value_tag == IPP_TAG_INTEGER)
+        {
+          if (n_job_ids == job_ids_alloc)
+          {
+            int *old = job_ids;
+            job_ids_alloc *= 2;
+            job_ids = realloc (job_ids, sizeof (int) * job_ids_alloc);
+            if (job_ids == NULL)
+            {
+              job_ids = old;
+              break;
+            }
+          }
+
+          job_ids[n_job_ids++] = attr->values[0].integer;
+          first = attr->values[0].integer + 1;
+          break;
+        }
+      }
+    }
+
+    if (returned < CUPS_PAGE_MAX)
+    {
+     /*
+      * One last request to fetch the jobs we're really interested in.
+      */
+
+      final = 1;
+
+     /*
+      * Figure out which jobs we need.
+      */
+
+      if ((var = cgiGetVariable("FIRST")) != NULL)
+        first = atoi(var);
+      else
+        first = 0;
+
+      if (!ascending)
+        first = total - first - CUPS_PAGE_MAX;
+
+      if (first >= total)
+        first = total - CUPS_PAGE_MAX;
+
+      if (first < 0)
+        first = 0;
+
+      first = job_ids[first];
+      free (job_ids);
+
+      if (first == first_job_id)
+       /*
+        * We have just requested these jobs so no need to re-fetch.
+        */
+        break;
+    }
+
+    cupsArrayDelete(jobs);
+    ippDelete(response);
+  } while (final || (returned == CUPS_PAGE_MAX && first > 0));
+
+  if (response != NULL)
+  {
+    sprintf(url, "%d", total);
+    cgiSetVariable("TOTAL", url);
 
     if ((var = cgiGetVariable("FIRST")) != NULL)
       first = atoi(var);
     else
       first = 0;
 
-    if (first >= count)
-      first = count - CUPS_PAGE_MAX;
+    if (first >= total)
+      first = total - CUPS_PAGE_MAX;
 
     first = (first / CUPS_PAGE_MAX) * CUPS_PAGE_MAX;
 
     if (first < 0)
       first = 0;
 
-    sprintf(url, "%d", count);
-    cgiSetVariable("TOTAL", url);
-
-    if ((var = cgiGetVariable("ORDER")) != NULL)
-      ascending = !strcasecmp(var, "asc");
-    else
-    {
-      ascending = !which_jobs || !strcasecmp(which_jobs, "not-completed");
-      cgiSetVariable("ORDER", ascending ? "asc" : "dec");
-    }
-
     if (ascending)
     {
-      for (i = 0, job = (ipp_attribute_t *)cupsArrayIndex(jobs, first);
+      for (i = 0, job = (ipp_attribute_t *)cupsArrayIndex(jobs, 0);
            i < CUPS_PAGE_MAX && job;
            i ++, job = (ipp_attribute_t *)cupsArrayNext(jobs))
         cgiSetIPPObjectVars(job, NULL, i);
     }
     else
     {
-      for (i = 0, job = (ipp_attribute_t *)cupsArrayIndex(jobs, count - first - 1);
+      for (i = 0, job = (ipp_attribute_t *)cupsArrayIndex(jobs, count - 1);
           i < CUPS_PAGE_MAX && job;
           i ++, job = (ipp_attribute_t *)cupsArrayPrev(jobs))
        cgiSetIPPObjectVars(job, NULL, i);
@@ -1371,7 +1478,7 @@ cgiShowJobs(http_t *http,		/* I - Connection to server */
       cgiSetVariable("PREVURL", url);
     }
 
-    if ((first + CUPS_PAGE_MAX) < count)
+    if ((first + CUPS_PAGE_MAX) < total)
     {
       snprintf(urlptr, urlend - urlptr, "FIRST=%d&ORDER=%s",
                first + CUPS_PAGE_MAX, ascending ? "asc" : "dec");

@michaelrsweet
Collaborator Author

"str2913.patch":

Index: scheduler/ipp.c
===================================================================
--- scheduler/ipp.c	(revision 12066)
+++ scheduler/ipp.c	(working copy)
@@ -1511,8 +1511,7 @@
   }
 
   if ((attr = ippFindAttribute(con->request, "job-name", IPP_TAG_ZERO)) == NULL)
-    ippAddString(con->request, IPP_TAG_JOB, IPP_TAG_NAME, "job-name", NULL,
-                 "Untitled");
+    ippAddString(con->request, IPP_TAG_JOB, IPP_TAG_NAME, "job-name", NULL, "Untitled");
   else if ((attr->value_tag != IPP_TAG_NAME &&
             attr->value_tag != IPP_TAG_NAMELANG) ||
            attr->num_values != 1)
@@ -1592,6 +1591,9 @@
     ippDeleteAttribute(job->attrs, auth_info);
   }
 
+  if ((attr = ippFindAttribute(con->request, "job-name", IPP_TAG_NAME)) != NULL)
+    cupsdSetString(&(job->name), attr->values[0].string.text);
+
   if ((attr = ippFindAttribute(job->attrs, "job-originating-host-name",
                                IPP_TAG_ZERO)) != NULL)
   {
@@ -1686,8 +1688,7 @@
   ippAddString(job->attrs, IPP_TAG_JOB, IPP_TAG_URI, "job-printer-uri", NULL,
                printer->uri);
 
-  if ((attr = ippFindAttribute(job->attrs, "job-k-octets",
-                               IPP_TAG_INTEGER)) != NULL)
+  if ((attr = ippFindAttribute(job->attrs, "job-k-octets", IPP_TAG_INTEGER)) != NULL)
     attr->values[0].integer = 0;
   else
     ippAddInteger(job->attrs, IPP_TAG_JOB, IPP_TAG_INTEGER, "job-k-octets", 0);
@@ -4328,8 +4329,9 @@
 
   kbytes = (cupsFileTell(out) + 1023) / 1024;
 
-  if ((attr = ippFindAttribute(job->attrs, "job-k-octets",
-                               IPP_TAG_INTEGER)) != NULL)
+  job->koctets += kbytes;
+
+  if ((attr = ippFindAttribute(job->attrs, "job-k-octets", IPP_TAG_INTEGER)) != NULL)
     attr->values[0].integer += kbytes;
 
   cupsFileClose(out);
@@ -4751,7 +4753,55 @@
                  "job-uri", NULL, job_uri);
   }
 
-  copy_attrs(con->response, job->attrs, ra, IPP_TAG_JOB, 0, exclude);
+  if (job->attrs)
+  {
+    copy_attrs(con->response, job->attrs, ra, IPP_TAG_JOB, 0, exclude);
+  }
+  else
+  {
+   /*
+    * Generate attributes from the job structure...
+    */
+
+    if (!ra || cupsArrayFind(ra, "job-id"))
+      ippAddInteger(con->response, IPP_TAG_JOB, IPP_TAG_INTEGER, "job-id", job->id);
+
+    if (!ra || cupsArrayFind(ra, "job-k-octets"))
+      ippAddInteger(con->response, IPP_TAG_JOB, IPP_TAG_INTEGER, "job-k-octets", job->koctets);
+
+    if (job->name && (!ra || cupsArrayFind(ra, "job-name")))
+      ippAddString(con->response, IPP_TAG_JOB, IPP_CONST_TAG(IPP_TAG_NAME), "job-name", NULL, job->name);
+
+    if (job->username && (!ra || cupsArrayFind(ra, "job-originating-user-name")))
+      ippAddString(con->response, IPP_TAG_JOB, IPP_CONST_TAG(IPP_TAG_NAME), "job-originating-user-name", NULL, job->username);
+
+    if (!ra || cupsArrayFind(ra, "job-state"))
+      ippAddInteger(con->response, IPP_TAG_JOB, IPP_TAG_ENUM, "job-state", (int)job->state_value);
+
+    if (!ra || cupsArrayFind(ra, "job-state-reasons"))
+    {
+      switch (job->state_value)
+      {
+        default : /* Should never get here for processing, pending, held, or stopped jobs since they don't get unloaded... */
+            break;
+        case IPP_JSTATE_ABORTED :
+            ippAddString(con->response, IPP_TAG_JOB, IPP_TAG_KEYWORD, "job-state-reasons", NULL, "job-aborted-by-system");
+            break;
+        case IPP_JSTATE_CANCELED :
+            ippAddString(con->response, IPP_TAG_JOB, IPP_TAG_KEYWORD, "job-state-reasons", NULL, "job-canceled-by-user");
+            break;
+        case IPP_JSTATE_COMPLETED :
+            ippAddString(con->response, IPP_TAG_JOB, IPP_TAG_KEYWORD, "job-state-reasons", NULL, "job-completed-successfully");
+            break;
+      }
+    }
+
+    if (job->completed_time && (!ra || cupsArrayFind(ra, "time-at-completed")))
+      ippAddInteger(con->response, IPP_TAG_JOB, IPP_TAG_INTEGER, "time-at-completed", (int)job->completed_time);
+
+    if (job->completed_time && (!ra || cupsArrayFind(ra, "time-at-creation")))
+      ippAddInteger(con->response, IPP_TAG_JOB, IPP_TAG_INTEGER, "time-at-creation", (int)job->creation_time);
+  }
 }
 
@@ -6101,9 +6151,13 @@
   int		port;			/* Port portion of URI */
   int		job_comparison;		/* Job comparison */
   ipp_jstate_t	job_state;		/* job-state value */
-  int		first_job_id;		/* First job ID */
-  int		limit;			/* Maximum number of jobs to return */
+  int		first_job_id = 1,	/* First job ID */
+		first_index = 1,	/* First index */
+		current_index = 0;	/* Current index */
+  int		limit = 0;		/* Maximum number of jobs to return */
   int		count;			/* Number of jobs that match */
+  int		need_load_job = 0;	/* Do we need to load the job? */
+  const char	*job_attr;		/* Job attribute requested */
   ipp_attribute_t *job_ids;		/* job-ids attribute */
   cupsd_job_t	*job;			/* Current job pointer */
   cupsd_printer_t *printer;		/* Printer */
@@ -6269,8 +6323,7 @@
   * See if they want to limit the number of jobs reported...
   */
 
-  if ((attr = ippFindAttribute(con->request, "limit",
-                               IPP_TAG_INTEGER)) != NULL)
+  if ((attr = ippFindAttribute(con->request, "limit", IPP_TAG_INTEGER)) != NULL)
   {
     if (job_ids)
     {
@@ -6282,31 +6335,37 @@
 
     limit = attr->values[0].integer;
   }
-  else
-    limit = 0;
 
-  if ((attr = ippFindAttribute(con->request, "first-job-id",
-                               IPP_TAG_INTEGER)) != NULL)
+  if ((attr = ippFindAttribute(con->request, "first-index", IPP_TAG_INTEGER)) != NULL)
+  {
+    if (job_ids)
+    {
+      send_ipp_status(con, IPP_CONFLICT,
+                      _("The %s attribute cannot be provided with job-ids."),
+                      "first-index");
+      return;
+    }
+
+    first_index = attr->values[0].integer;
+  }
+  else if ((attr = ippFindAttribute(con->request, "first-job-id", IPP_TAG_INTEGER)) != NULL)
   {
     if (job_ids)
     {
      send_ipp_status(con, IPP_CONFLICT,
                      _("The %s attribute cannot be provided with job-ids."),
                      "first-job-id");
      return;
    }
 
     first_job_id = attr->values[0].integer;
   }
-  else
-    first_job_id = 1;
 
  /*
   * See if we only want to see jobs for a specific user...
   */
 
-  if ((attr = ippFindAttribute(con->request, "my-jobs",
-                               IPP_TAG_BOOLEAN)) != NULL && job_ids)
+  if ((attr = ippFindAttribute(con->request, "my-jobs", IPP_TAG_BOOLEAN)) != NULL && job_ids)
   {
     send_ipp_status(con, IPP_CONFLICT,
                     _("The %s attribute cannot be provided with job-ids."),
@@ -6319,7 +6378,43 @@
     username[0] = '\0';
 
   ra = create_requested_array(con->request);
 
+  for (job_attr = (char *)cupsArrayFirst(ra); job_attr; job_attr = (char *)cupsArrayNext(ra))
+    if (strcmp(job_attr, "job-id") &&
+        strcmp(job_attr, "job-k-octets") &&
+        strcmp(job_attr, "job-media-progress") &&
+        strcmp(job_attr, "job-more-info") &&
+        strcmp(job_attr, "job-name") &&
+        strcmp(job_attr, "job-originating-user-name") &&
+        strcmp(job_attr, "job-preserved") &&
+        strcmp(job_attr, "job-printer-up-time") &&
+        strcmp(job_attr, "job-printer-uri") &&
+        strcmp(job_attr, "job-state") &&
+        strcmp(job_attr, "job-state-reasons") &&
+        strcmp(job_attr, "job-uri") &&
+        strcmp(job_attr, "time-at-completed") &&
+        strcmp(job_attr, "time-at-creation") &&
+        strcmp(job_attr, "number-of-documents"))
+    {
+      need_load_job = 1;
+      break;
+    }
+
+  if (need_load_job && (limit == 0 || limit > 500) && (list == Jobs || delete_list))
+  {
+   /*
+    * Limit expensive Get-Jobs for job history to 500 jobs...
+    */
+
+    ippAddInteger(con->response, IPP_TAG_OPERATION, IPP_TAG_INTEGER, "limit", 500);
+
+    if (limit)
+      ippAddInteger(con->response, IPP_TAG_UNSUPPORTED_GROUP, IPP_TAG_INTEGER, "limit", limit);
+
+    limit = 500;
+
+    cupsdLogClient(con, CUPSD_LOG_INFO, "Limiting Get-Jobs response to %d jobs.", limit);
+  }
+
  /*
   * OK, build a list of jobs for this printer...
   */
@@ -6345,13 +6440,15 @@
     {
       job = cupsdFindJob(job_ids->values[i].integer);
 
-      cupsdLoadJob(job);
-
-      if (!job->attrs)
-      {
-        cupsdLogMessage(CUPSD_LOG_DEBUG2, "get_jobs: No attributes for job %d",
-                        job->id);
-        continue;
+      if (need_load_job && !job->attrs)
+      {
+        cupsdLoadJob(job);
+
+        if (!job->attrs)
+        {
+          cupsdLogMessage(CUPSD_LOG_DEBUG2, "get_jobs: No attributes for job %d", job->id);
+          continue;
+        }
       }
 
       if (i > 0)
@@ -6401,13 +6498,19 @@
       if (job->id < first_job_id)
         continue;
 
-      cupsdLoadJob(job);
+      current_index ++;
+      if (current_index < first_index)
+        continue;
 
-      if (!job->attrs)
+      if (need_load_job && !job->attrs)
       {
-        cupsdLogMessage(CUPSD_LOG_DEBUG2, "get_jobs: No attributes for job %d",
-                        job->id);
-        continue;
+        cupsdLoadJob(job);
+
+        if (!job->attrs)
+        {
+          cupsdLogMessage(CUPSD_LOG_DEBUG2, "get_jobs: No attributes for job %d", job->id);
+          continue;
+        }
       }
 
       if (username[0] && _cups_strcasecmp(username, job->username))
@@ -8141,8 +8244,9 @@
 
   cupsdUpdateQuota(printer, job->username, 0, kbytes);
 
-  if ((attr = ippFindAttribute(job->attrs, "job-k-octets",
-                               IPP_TAG_INTEGER)) != NULL)
+  job->koctets += kbytes;
+
+  if ((attr = ippFindAttribute(job->attrs, "job-k-octets", IPP_TAG_INTEGER)) != NULL)
     attr->values[0].integer += kbytes;
 
  /*
@@ -9375,8 +9479,9 @@
 
   cupsdUpdateQuota(printer, job->username, 0, kbytes);
 
-  if ((attr = ippFindAttribute(job->attrs, "job-k-octets",
-                               IPP_TAG_INTEGER)) != NULL)
+  job->koctets += kbytes;
+
+  if ((attr = ippFindAttribute(job->attrs, "job-k-octets", IPP_TAG_INTEGER)) != NULL)
     attr->values[0].integer += kbytes;
 
   snprintf(filename, sizeof(filename), "%s/d%05d-%03d", RequestRoot, job->id,

Index: scheduler/job.c
===================================================================
--- scheduler/job.c	(revision 12066)
+++ scheduler/job.c	(working copy)
@@ -1675,9 +1675,10 @@
   job->file_time = 0;
   job->history_time = 0;
 
-  if (job->state_value >= IPP_JOB_CANCELED &&
-      (attr = ippFindAttribute(job->attrs, "time-at-completed",
-                               IPP_TAG_INTEGER)) != NULL)
+  if ((attr = ippFindAttribute(job->attrs, "time-at-creation", IPP_TAG_INTEGER)) != NULL)
+    job->creation_time = attr->values[0].integer;
+
+  if (job->state_value >= IPP_JOB_CANCELED && (attr = ippFindAttribute(job->attrs, "time-at-completed", IPP_TAG_INTEGER)) != NULL)
   {
     job->completed_time = attr->values[0].integer;
@@ -1826,6 +1827,12 @@
       cupsdSetString(&job->username, attr->values[0].string.text);
   }
 
+  if (!job->name)
+  {
+    if ((attr = ippFindAttribute(job->attrs, "job-name", IPP_TAG_NAME)) != NULL)
+      cupsdSetString(&job->name, attr->values[0].string.text);
+  }
+
  /*
   * Set the job hold-until time and state...
   */
@@ -1850,6 +1857,9 @@
       job->state_value = IPP_JOB_PENDING;
   }
 
+  if ((attr = ippFindAttribute(job->attrs, "job-k-octets", IPP_TAG_INTEGER)) != NULL)
+    job->koctets = attr->values[0].integer;
+
   if (!job->num_files)
   {
    /*
@@ -2155,14 +2165,18 @@
   {
     cupsFilePrintf(fp, "<Job %d>\n", job->id);
     cupsFilePrintf(fp, "State %d\n", job->state_value);
+    cupsFilePrintf(fp, "Created %ld\n", (long)job->creation_time);
     if (job->completed_time)
       cupsFilePrintf(fp, "Completed %ld\n", (long)job->completed_time);
     cupsFilePrintf(fp, "Priority %d\n", job->priority);
     if (job->hold_until)
       cupsFilePrintf(fp, "HoldUntil %ld\n", (long)job->hold_until);
     cupsFilePrintf(fp, "Username %s\n", job->username);
+    if (job->name)
+      cupsFilePutConf(fp, "Name", job->name);
     cupsFilePrintf(fp, "Destination %s\n", job->dest);
     cupsFilePrintf(fp, "DestType %d\n", job->dtype);
+    cupsFilePrintf(fp, "KOctets %d\n", job->koctets);
     cupsFilePrintf(fp, "NumFiles %d\n", job->num_files);
     for (i = 0; i < job->num_files; i ++)
       cupsFilePrintf(fp, "File %d %s/%s %d\n", i + 1, job->filetypes[i]->super,
@@ -4114,7 +4128,7 @@
     cupsArrayAdd(ActiveJobs, job);
   else if (job->state_value > IPP_JOB_STOPPED)
   {
-    if (!job->completed_time)
+    if (!job->completed_time || !job->creation_time || !job->name || !job->koctets)
     {
       cupsdLoadJob(job);
       unload_job(job);
@@ -4137,6 +4151,14 @@
       else if (job->state_value > IPP_JOB_COMPLETED)
         job->state_value = IPP_JOB_COMPLETED;
     }
+    else if (!_cups_strcasecmp(line, "Name"))
+    {
+      cupsdSetString(&(job->name), value);
+    }
+    else if (!_cups_strcasecmp(line, "Created"))
+    {
+      job->creation_time = strtol(value, NULL, 10);
+    }
     else if (!_cups_strcasecmp(line, "Completed"))
     {
       job->completed_time = strtol(value, NULL, 10);
@@ -4161,6 +4183,10 @@
     {
       job->dtype = (cups_ptype_t)atoi(value);
     }
+    else if (!_cups_strcasecmp(line, "KOctets"))
+    {
+      job->koctets = atoi(value);
+    }
     else if (!_cups_strcasecmp(line, "NumFiles"))
     {
       job->num_files = atoi(value);

Index: scheduler/job.h
===================================================================
--- scheduler/job.h	(revision 12066)
+++ scheduler/job.h	(working copy)
@@ -39,6 +39,8 @@
 					/* waiting on files */
   char			*username;	/* Printing user */
   char			*dest;		/* Destination printer or class */
+  char			*name;		/* Job name/title */
+  int			koctets;	/* job-k-octets */
   cups_ptype_t		dtype;		/* Destination type */
   cupsd_printer_t	*printer;	/* Printer this job is assigned to */
   int			num_files;	/* Number of files in job */
@@ -47,6 +49,7 @@
   ipp_attribute_t	*sheets;	/* job-media-sheets-completed */
   time_t		access_time,	/* Last access time */
 			cancel_time,	/* When to cancel/send SIGTERM */
+			creation_time,	/* When job was created */
 			completed_time,	/* When job was completed (0 if not) */
 			file_time,	/* Job file retain time */
 			history_time,	/* Job history retain time */
    

@michaelrsweet
Collaborator Author

CUPS.org User: twaugh.redhat

Thanks. This looks like a good solution.

@michaelrsweet added the enhancement label on Mar 17, 2016
@michaelrsweet added this to the Stable milestone on Mar 17, 2016