Jan 14 01:04:42.085905 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:04:42.085962 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:04:42.085997 kernel: BIOS-provided physical RAM map: Jan 14 01:04:42.086012 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 14 01:04:42.086025 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 14 01:04:42.086039 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 14 01:04:42.086055 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 14 01:04:42.086071 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 14 01:04:42.086086 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable Jan 14 01:04:42.086106 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data Jan 14 01:04:42.086121 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable Jan 14 01:04:42.086136 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 14 01:04:42.086152 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 14 01:04:42.086167 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 14 01:04:42.086188 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 14 01:04:42.086204 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 14 01:04:42.086220 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 14 01:04:42.086236 kernel: NX (Execute Disable) protection: active Jan 14 01:04:42.086251 kernel: APIC: Static calls initialized Jan 14 01:04:42.086267 kernel: efi: EFI v2.7 by EDK II Jan 14 01:04:42.086284 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018 Jan 14 01:04:42.086300 kernel: random: crng init done Jan 14 01:04:42.086315 kernel: secureboot: Secure boot disabled Jan 14 01:04:42.086331 kernel: SMBIOS 2.4 present. 
Jan 14 01:04:42.086370 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 14 01:04:42.086386 kernel: DMI: Memory slots populated: 1/1 Jan 14 01:04:42.086401 kernel: Hypervisor detected: KVM Jan 14 01:04:42.086418 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 14 01:04:42.086434 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 14 01:04:42.086450 kernel: kvm-clock: using sched offset of 11913633930 cycles Jan 14 01:04:42.086467 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 01:04:42.086483 kernel: tsc: Detected 2299.998 MHz processor Jan 14 01:04:42.086506 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:04:42.086528 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:04:42.086545 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 14 01:04:42.086561 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 14 01:04:42.086578 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:04:42.086595 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 14 01:04:42.086610 kernel: Using GB pages for direct mapping Jan 14 01:04:42.086627 kernel: ACPI: Early table checksum verification disabled Jan 14 01:04:42.086654 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 14 01:04:42.086669 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 14 01:04:42.086686 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 14 01:04:42.086703 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 14 01:04:42.086722 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 14 01:04:42.086745 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 14 01:04:42.086765 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 14 01:04:42.086784 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 14 01:04:42.086803 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 14 01:04:42.086821 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 14 01:04:42.086840 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 14 01:04:42.086863 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 14 01:04:42.086882 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 14 01:04:42.086901 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 14 01:04:42.086920 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 14 01:04:42.086939 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 14 01:04:42.086958 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 14 01:04:42.086976 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 14 01:04:42.087004 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 14 01:04:42.087027 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 14 01:04:42.087046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 14 01:04:42.087064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 14 01:04:42.087083 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 14 01:04:42.087102 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jan 14 01:04:42.087122 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jan 14 01:04:42.087141 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Jan 14 01:04:42.087165 kernel: Zone ranges: Jan 14 01:04:42.087183 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:04:42.087202 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 01:04:42.087221 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 14 01:04:42.087239 kernel: Device empty Jan 14 01:04:42.087258 kernel: Movable zone start for each node Jan 14 01:04:42.087276 kernel: Early memory node ranges Jan 14 01:04:42.087299 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 14 01:04:42.087318 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 14 01:04:42.088058 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff] Jan 14 01:04:42.088089 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] Jan 14 01:04:42.088108 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 14 01:04:42.088129 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 14 01:04:42.088148 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 14 01:04:42.088167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:04:42.088193 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 14 01:04:42.088213 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 14 01:04:42.088232 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Jan 14 01:04:42.088250 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 14 01:04:42.088267 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 14 01:04:42.088284 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 14 01:04:42.088301 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 14 01:04:42.088325 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 01:04:42.088375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 14 01:04:42.088393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:04:42.088410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 14 01:04:42.088426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 14 01:04:42.088443 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:04:42.088461 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:04:42.088483 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:04:42.088494 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:04:42.088505 kernel: CPU topo: Max. threads per core: 2 Jan 14 01:04:42.088516 kernel: CPU topo: Num. cores per package: 1 Jan 14 01:04:42.088527 kernel: CPU topo: Num. 
threads per package: 2 Jan 14 01:04:42.088538 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:04:42.088549 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 14 01:04:42.088559 kernel: Booting paravirtualized kernel on KVM Jan 14 01:04:42.088574 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:04:42.088585 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:04:42.088597 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:04:42.088608 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:04:42.088618 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:04:42.088629 kernel: kvm-guest: PV spinlocks enabled Jan 14 01:04:42.088640 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 01:04:42.088656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:04:42.088668 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 01:04:42.088679 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 01:04:42.088690 kernel: Fallback order for Node 0: 0 Jan 14 01:04:42.088701 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Jan 14 01:04:42.088712 kernel: Policy zone: Normal Jan 14 01:04:42.088723 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:04:42.088737 kernel: software IO TLB: area num 2. Jan 14 01:04:42.088758 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:04:42.088773 kernel: Kernel/User page tables isolation: enabled Jan 14 01:04:42.088784 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:04:42.088796 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:04:42.088807 kernel: Dynamic Preempt: voluntary Jan 14 01:04:42.088819 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:04:42.088832 kernel: rcu: RCU event tracing is enabled. Jan 14 01:04:42.088844 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:04:42.088858 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:04:42.088870 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:04:42.088882 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:04:42.088893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:04:42.088907 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:04:42.088919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:04:42.088931 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:04:42.088943 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:04:42.088955 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 14 01:04:42.088966 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 01:04:42.088978 kernel: Console: colour dummy device 80x25 Jan 14 01:04:42.088999 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:04:42.089011 kernel: ACPI: Core revision 20240827 Jan 14 01:04:42.089023 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:04:42.089034 kernel: x2apic enabled Jan 14 01:04:42.089046 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:04:42.089057 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 14 01:04:42.089069 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 14 01:04:42.089081 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 14 01:04:42.089095 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 14 01:04:42.089107 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 14 01:04:42.089118 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:04:42.089130 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 14 01:04:42.089142 kernel: Spectre V2 : Mitigation: IBRS Jan 14 01:04:42.089153 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:04:42.089165 kernel: RETBleed: Mitigation: IBRS Jan 14 01:04:42.089180 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 14 01:04:42.089191 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 14 01:04:42.089203 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 14 01:04:42.089214 kernel: MDS: Mitigation: Clear CPU buffers Jan 14 01:04:42.089226 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 01:04:42.089241 kernel: active return thunk: its_return_thunk Jan 14 01:04:42.089252 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:04:42.089267 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:04:42.089279 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:04:42.089290 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:04:42.089302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:04:42.089313 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 14 01:04:42.089325 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:04:42.089354 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:04:42.089376 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:04:42.089404 kernel: landlock: Up and running. Jan 14 01:04:42.089424 kernel: SELinux: Initializing. Jan 14 01:04:42.089444 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:04:42.089464 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:04:42.089484 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 14 01:04:42.089504 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 14 01:04:42.089529 kernel: signal: max sigframe size: 1776 Jan 14 01:04:42.089547 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:04:42.089565 kernel: rcu: Max phase no-delay instances is 400. 
Jan 14 01:04:42.089584 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:04:42.089602 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:04:42.089622 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:04:42.089640 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:04:42.089665 kernel: .... node #0, CPUs: #1 Jan 14 01:04:42.089685 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 14 01:04:42.089708 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 14 01:04:42.089728 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:04:42.089749 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 14 01:04:42.089769 kernel: Memory: 7580388K/7860544K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 274324K reserved, 0K cma-reserved) Jan 14 01:04:42.089792 kernel: devtmpfs: initialized Jan 14 01:04:42.089811 kernel: x86/mm: Memory block size: 128MB Jan 14 01:04:42.089829 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 14 01:04:42.089848 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:04:42.089868 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:04:42.089888 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:04:42.089907 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:04:42.089930 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:04:42.089950 kernel: audit: type=2000 audit(1768352678.350:1): state=initialized audit_enabled=0 res=1 Jan 14 01:04:42.089968 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:04:42.089997 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:04:42.090016 kernel: cpuidle: using governor menu Jan 14 01:04:42.090035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:04:42.090053 kernel: dca service started, version 1.12.1 Jan 14 01:04:42.090072 kernel: PCI: Using configuration type 1 for base access Jan 14 01:04:42.090097 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 01:04:42.090115 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 01:04:42.090133 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 01:04:42.090153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:04:42.090172 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:04:42.090192 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:04:42.090210 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:04:42.090232 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:04:42.090250 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 14 01:04:42.090268 kernel: ACPI: Interpreter enabled Jan 14 01:04:42.090287 kernel: ACPI: PM: (supports S0 S3 S5) Jan 14 01:04:42.090306 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:04:42.090324 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:04:42.090361 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 01:04:42.090385 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 14 01:04:42.090402 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 01:04:42.090745 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 14 01:04:42.091038 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 14 01:04:42.091315 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 14 01:04:42.091358 kernel: PCI host bridge to bus 0000:00 Jan 14 01:04:42.091641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 01:04:42.091887 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 01:04:42.092192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 01:04:42.092450 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 14 01:04:42.092686 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 01:04:42.092996 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 14 01:04:42.093279 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jan 14 01:04:42.093616 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 14 01:04:42.093871 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 14 01:04:42.094141 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jan 14 01:04:42.094430 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 14 01:04:42.094697 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jan 14 01:04:42.095238 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 01:04:42.095883 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jan 14 01:04:42.096241 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jan 14 01:04:42.096559 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 14 01:04:42.096847 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jan 14 01:04:42.097133 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jan 14 01:04:42.097159 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 01:04:42.097179 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 01:04:42.097200 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Jan 14 01:04:42.097219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 01:04:42.097244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 14 01:04:42.097265 kernel: iommu: Default domain type: Translated Jan 14 01:04:42.097286 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:04:42.097306 kernel: efivars: Registered efivars operations Jan 14 01:04:42.097327 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:04:42.098387 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 01:04:42.098412 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 14 01:04:42.098440 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 14 01:04:42.098461 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff] Jan 14 01:04:42.098482 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 14 01:04:42.098502 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 14 01:04:42.098523 kernel: vgaarb: loaded Jan 14 01:04:42.098544 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 01:04:42.098565 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:04:42.098586 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:04:42.098612 kernel: pnp: PnP ACPI init Jan 14 01:04:42.098632 kernel: pnp: PnP ACPI: found 7 devices Jan 14 01:04:42.098654 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 01:04:42.098674 kernel: NET: Registered PF_INET protocol family Jan 14 01:04:42.098695 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:04:42.098715 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 01:04:42.098736 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:04:42.098761 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 01:04:42.098782 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 01:04:42.098803 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 01:04:42.098823 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:04:42.098845 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:04:42.098865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:04:42.098885 kernel: NET: Registered PF_XDP protocol family Jan 14 01:04:42.099176 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 01:04:42.099465 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 01:04:42.100488 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 01:04:42.100750 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 14 01:04:42.101170 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 14 01:04:42.101210 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:04:42.101232 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 01:04:42.101252 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 14 01:04:42.101273 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 01:04:42.101292 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 14 01:04:42.101310 kernel: clocksource: Switched to clocksource tsc Jan 14 01:04:42.101331 
kernel: Initialise system trusted keyrings Jan 14 01:04:42.101383 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 01:04:42.101403 kernel: Key type asymmetric registered Jan 14 01:04:42.101422 kernel: Asymmetric key parser 'x509' registered Jan 14 01:04:42.101440 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:04:42.101460 kernel: io scheduler mq-deadline registered Jan 14 01:04:42.101479 kernel: io scheduler kyber registered Jan 14 01:04:42.101498 kernel: io scheduler bfq registered Jan 14 01:04:42.101521 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:04:42.101541 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 14 01:04:42.101814 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 14 01:04:42.101839 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 14 01:04:42.102121 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 14 01:04:42.102147 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 14 01:04:42.102436 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 14 01:04:42.102483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:04:42.102501 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:04:42.102521 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 01:04:42.102540 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 14 01:04:42.102574 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 14 01:04:42.102877 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 14 01:04:42.102920 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 01:04:42.102948 kernel: i8042: Warning: Keylock active Jan 14 01:04:42.102968 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 01:04:42.102997 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 01:04:42.103281 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 14 01:04:42.103578 kernel: rtc_cmos 00:00: registered as rtc0 Jan 14 01:04:42.103834 kernel: rtc_cmos 00:00: setting system clock to 2026-01-14T01:04:40 UTC (1768352680) Jan 14 01:04:42.104104 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 14 01:04:42.104129 kernel: intel_pstate: CPU model not supported Jan 14 01:04:42.104150 kernel: pstore: Using crash dump compression: deflate Jan 14 01:04:42.104170 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 01:04:42.104190 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:04:42.104210 kernel: Segment Routing with IPv6 Jan 14 01:04:42.104228 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 01:04:42.104251 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:04:42.104360 kernel: Key type dns_resolver registered Jan 14 01:04:42.104381 kernel: IPI shorthand broadcast: enabled Jan 14 01:04:42.104401 kernel: sched_clock: Marking stable (1987005888, 194577697)->(2233255347, -51671762) Jan 14 01:04:42.104449 kernel: registered taskstats version 1 Jan 14 01:04:42.104470 kernel: Loading compiled-in X.509 certificates Jan 14 01:04:42.104490 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:04:42.104614 kernel: Demotion targets for Node 0: null Jan 14 01:04:42.104636 kernel: Key type .fscrypt registered Jan 14 01:04:42.104656 kernel: Key type fscrypt-provisioning registered Jan 14 
01:04:42.104676 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:04:42.104696 kernel: ima: Can not allocate sha384 (reason: -2) Jan 14 01:04:42.104716 kernel: ima: No architecture policies found Jan 14 01:04:42.104736 kernel: clk: Disabling unused clocks Jan 14 01:04:42.104760 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:04:42.104780 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:04:42.104799 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:04:42.104820 kernel: Run /init as init process Jan 14 01:04:42.104840 kernel: with arguments: Jan 14 01:04:42.104859 kernel: /init Jan 14 01:04:42.104879 kernel: with environment: Jan 14 01:04:42.104898 kernel: HOME=/ Jan 14 01:04:42.104921 kernel: TERM=linux Jan 14 01:04:42.104941 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 01:04:42.104962 kernel: SCSI subsystem initialized Jan 14 01:04:42.105275 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jan 14 01:04:42.105636 kernel: scsi host0: Virtio SCSI HBA Jan 14 01:04:42.105675 kernel: blk-mq: reduced tag depth to 10240 Jan 14 01:04:42.105995 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 14 01:04:42.106208 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 14 01:04:42.106444 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 14 01:04:42.106644 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 14 01:04:42.106836 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 14 01:04:42.107048 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 14 01:04:42.107099 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:04:42.107120 kernel: GPT:25804799 != 33554431 Jan 14 01:04:42.107142 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 01:04:42.107162 kernel: GPT:25804799 != 33554431 Jan 14 01:04:42.107183 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:04:42.107207 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 01:04:42.107526 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 14 01:04:42.107553 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:04:42.107574 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:04:42.107595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:04:42.107616 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:04:42.107637 kernel: raid6: avx2x4 gen() 18099 MB/s Jan 14 01:04:42.107663 kernel: raid6: avx2x2 gen() 18196 MB/s Jan 14 01:04:42.107684 kernel: raid6: avx2x1 gen() 13883 MB/s Jan 14 01:04:42.107705 kernel: raid6: using algorithm avx2x2 gen() 18196 MB/s Jan 14 01:04:42.107726 kernel: raid6: .... 
xor() 18437 MB/s, rmw enabled Jan 14 01:04:42.107747 kernel: raid6: using avx2x2 recovery algorithm Jan 14 01:04:42.107768 kernel: xor: automatically using best checksumming function avx Jan 14 01:04:42.107789 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:04:42.107810 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (155) Jan 14 01:04:42.107835 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:04:42.107856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:04:42.107877 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 01:04:42.107897 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:04:42.107918 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:04:42.107939 kernel: loop: module loaded Jan 14 01:04:42.107960 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:04:42.107992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:04:42.108016 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:04:42.108043 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:04:42.108065 systemd[1]: Detected virtualization google. Jan 14 01:04:42.108087 systemd[1]: Detected architecture x86-64. Jan 14 01:04:42.108112 systemd[1]: Running in initrd. Jan 14 01:04:42.108133 systemd[1]: No hostname configured, using default hostname. Jan 14 01:04:42.108155 systemd[1]: Hostname set to . Jan 14 01:04:42.108177 systemd[1]: Initializing machine ID from random generator. Jan 14 01:04:42.108198 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:04:42.108219 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:04:42.108241 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:04:42.108267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:04:42.108291 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:04:42.108313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:04:42.108353 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:04:42.108376 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:04:42.108403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:04:42.108426 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:04:42.108447 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:04:42.108470 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:04:42.108495 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:04:42.108520 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:04:42.108543 systemd[1]: Reached target timers.target - Timer Units. 
Jan 14 01:04:42.108565 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:04:42.108585 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:04:42.108604 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:04:42.108713 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:04:42.108755 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:04:42.108785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:04:42.108818 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:04:42.108841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:04:42.108862 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:04:42.108884 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:04:42.108920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:04:42.108947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:04:42.108965 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:04:42.109165 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:04:42.109193 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:04:42.109211 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:04:42.109229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:04:42.109257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:04:42.109277 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:04:42.109295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:04:42.109317 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:04:42.109400 systemd-journald[291]: Collecting audit messages is enabled. Jan 14 01:04:42.109436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:04:42.109451 systemd-journald[291]: Journal started Jan 14 01:04:42.109478 systemd-journald[291]: Runtime Journal (/run/log/journal/b3fc0478b8994f5fb2a9eb72c5e28c35) is 8M, max 148.4M, 140.4M free. Jan 14 01:04:42.111389 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:04:42.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.115386 kernel: audit: type=1130 audit(1768352682.110:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.122591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:04:42.149255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 14 01:04:42.155097 kernel: audit: type=1130 audit(1768352682.147:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.158566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:04:42.166838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:04:42.177374 kernel: audit: type=1130 audit(1768352682.170:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.177416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:04:42.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.169722 systemd-tmpfiles[304]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:04:42.182093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:04:42.191359 kernel: Bridge firewalling registered Jan 14 01:04:42.193202 systemd-modules-load[293]: Inserted module 'br_netfilter' Jan 14 01:04:42.196631 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:04:42.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.203369 kernel: audit: type=1130 audit(1768352682.197:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.203063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:04:42.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.210430 kernel: audit: type=1130 audit(1768352682.201:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.215679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:04:42.220527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:04:42.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.224383 kernel: audit: type=1130 audit(1768352682.219:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:04:42.234624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:04:42.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.269444 kernel: audit: type=1130 audit(1768352682.244:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.266625 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:04:42.276325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:04:42.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.323363 kernel: audit: type=1130 audit(1768352682.301:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.324000 audit: BPF prog-id=6 op=LOAD Jan 14 01:04:42.333828 kernel: audit: type=1334 audit(1768352682.324:10): prog-id=6 op=LOAD Jan 14 01:04:42.334201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:04:42.352598 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:04:42.433891 systemd-resolved[331]: Positive Trust Anchors: Jan 14 01:04:42.434508 systemd-resolved[331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:04:42.434517 systemd-resolved[331]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:04:42.434592 systemd-resolved[331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:04:42.474045 systemd-resolved[331]: Defaulting to hostname 'linux'. Jan 14 01:04:42.536533 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:04:42.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.476283 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:04:42.529661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 14 01:04:42.569552 kernel: iscsi: registered transport (tcp) Jan 14 01:04:42.594198 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:04:42.594286 kernel: QLogic iSCSI HBA Driver Jan 14 01:04:42.629782 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:04:42.667704 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:04:42.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.670449 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:04:42.765938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:04:42.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.783666 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:04:42.793839 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:04:42.855789 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:04:42.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.857000 audit: BPF prog-id=7 op=LOAD Jan 14 01:04:42.857000 audit: BPF prog-id=8 op=LOAD Jan 14 01:04:42.859797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:04:42.921623 systemd-udevd[559]: Using default interface naming scheme 'v257'. Jan 14 01:04:42.943648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:04:42.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.956583 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:04:42.981951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:04:42.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:42.993000 audit: BPF prog-id=9 op=LOAD Jan 14 01:04:42.995641 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:04:43.037082 dracut-pre-trigger[652]: rd.md=0: removing MD RAID activation Jan 14 01:04:43.089383 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:04:43.095314 systemd-networkd[669]: lo: Link UP Jan 14 01:04:43.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:43.095322 systemd-networkd[669]: lo: Gained carrier Jan 14 01:04:43.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:04:43.110811 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:04:43.126271 systemd[1]: Reached target network.target - Network. Jan 14 01:04:43.139796 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:04:43.268698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:04:43.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:43.281265 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:04:43.479364 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:04:43.496408 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 14 01:04:43.524380 kernel: AES CTR mode by8 optimization enabled Jan 14 01:04:43.541362 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:04:43.658614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 14 01:04:43.683329 systemd-networkd[669]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:04:43.683365 systemd-networkd[669]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:04:43.686617 systemd-networkd[669]: eth0: Link UP Jan 14 01:04:43.687028 systemd-networkd[669]: eth0: Gained carrier Jan 14 01:04:43.687048 systemd-networkd[669]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:04:43.696442 systemd-networkd[669]: eth0: Overlong DHCP hostname received, shortened from 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3.c.flatcar-212911.internal' to 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:04:43.696463 systemd-networkd[669]: eth0: DHCPv4 address 10.128.0.42/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 14 01:04:43.696901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 14 01:04:43.740327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 14 01:04:43.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:43.770991 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:04:43.788709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:04:43.788958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:04:43.886160 disk-uuid[803]: Primary Header is updated. Jan 14 01:04:43.886160 disk-uuid[803]: Secondary Entries is updated. Jan 14 01:04:43.886160 disk-uuid[803]: Secondary Header is updated. Jan 14 01:04:43.826567 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:04:43.836955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:04:43.952241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 01:04:43.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:43.976934 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 01:04:43.977010 kernel: audit: type=1130 audit(1768352683.970:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.085098 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:04:44.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.103707 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:04:44.142680 kernel: audit: type=1130 audit(1768352684.101:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.122955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:04:44.152538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:04:44.171148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:04:44.227222 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:04:44.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.265381 kernel: audit: type=1130 audit(1768352684.243:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.811602 systemd-networkd[669]: eth0: Gained IPv6LL Jan 14 01:04:44.948539 disk-uuid[805]: Warning: The kernel is still using the old partition table. Jan 14 01:04:44.948539 disk-uuid[805]: The new table will be used at the next reboot or after you Jan 14 01:04:44.948539 disk-uuid[805]: run partprobe(8) or kpartx(8) Jan 14 01:04:44.948539 disk-uuid[805]: The operation has completed successfully. Jan 14 01:04:45.038574 kernel: audit: type=1130 audit(1768352684.967:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.038625 kernel: audit: type=1131 audit(1768352684.967:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:44.958717 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:04:44.958903 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 14 01:04:44.971578 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 01:04:45.091412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (833) Jan 14 01:04:45.109694 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:04:45.109791 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:04:45.128391 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 01:04:45.128491 kernel: BTRFS info (device sda6): turning on async discard Jan 14 01:04:45.128517 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 01:04:45.151423 kernel: BTRFS info (device sda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:04:45.151963 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:04:45.189566 kernel: audit: type=1130 audit(1768352685.160:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.164397 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 01:04:45.453941 ignition[852]: Ignition 2.24.0 Jan 14 01:04:45.453961 ignition[852]: Stage: fetch-offline Jan 14 01:04:45.495542 kernel: audit: type=1130 audit(1768352685.459:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.459648 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:04:45.454032 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:45.483586 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 01:04:45.454052 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:45.455767 ignition[852]: parsed url from cmdline: "" Jan 14 01:04:45.455785 ignition[852]: no config URL provided Jan 14 01:04:45.455799 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:04:45.455828 ignition[852]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:04:45.455846 ignition[852]: failed to fetch config: resource requires networking Jan 14 01:04:45.598530 kernel: audit: type=1130 audit(1768352685.561:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:45.555893 unknown[859]: fetched base config from "system" Jan 14 01:04:45.457441 ignition[852]: Ignition finished successfully Jan 14 01:04:45.555906 unknown[859]: fetched base config from "system" Jan 14 01:04:45.543449 ignition[859]: Ignition 2.24.0 Jan 14 01:04:45.555916 unknown[859]: fetched user config from "gcp" Jan 14 01:04:45.543458 ignition[859]: Stage: fetch Jan 14 01:04:45.559648 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 01:04:45.543652 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:45.686536 kernel: audit: type=1130 audit(1768352685.658:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.565572 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 01:04:45.543665 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:45.647187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:04:45.543809 ignition[859]: parsed url from cmdline: "" Jan 14 01:04:45.669637 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:04:45.543814 ignition[859]: no config URL provided Jan 14 01:04:45.723885 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:04:45.768571 kernel: audit: type=1130 audit(1768352685.731:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:45.543826 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:04:45.733559 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:04:45.543835 ignition[859]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:04:45.778588 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:04:45.543871 ignition[859]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 14 01:04:45.797563 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:04:45.546949 ignition[859]: GET result: OK Jan 14 01:04:45.814546 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:04:45.547145 ignition[859]: parsing config with SHA512: 3f367eff78805baef22bbd8bbd9d0e35c30236ec3841f376b15379f0effeb79929f318bbe0eab3d3c3599ce548d0881092421db10d54de706a4b32005b7bf625 Jan 14 01:04:45.830579 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:04:45.556566 ignition[859]: fetch: fetch complete Jan 14 01:04:45.849040 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 14 01:04:45.556573 ignition[859]: fetch: fetch passed Jan 14 01:04:45.556631 ignition[859]: Ignition finished successfully Jan 14 01:04:45.642576 ignition[865]: Ignition 2.24.0 Jan 14 01:04:45.642584 ignition[865]: Stage: kargs Jan 14 01:04:45.642796 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:45.642808 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:45.645058 ignition[865]: kargs: kargs passed Jan 14 01:04:45.645145 ignition[865]: Ignition finished successfully Jan 14 01:04:45.721038 ignition[871]: Ignition 2.24.0 Jan 14 01:04:45.721046 ignition[871]: Stage: disks Jan 14 01:04:45.721242 ignition[871]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:45.721253 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:45.722437 ignition[871]: disks: disks passed Jan 14 01:04:45.722501 ignition[871]: Ignition finished successfully Jan 14 01:04:45.934792 systemd-fsck[879]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 01:04:45.996481 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 01:04:46.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:46.008427 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 01:04:46.215382 kernel: EXT4-fs (sda9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none. Jan 14 01:04:46.216217 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 01:04:46.224267 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 01:04:46.233672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:04:46.279467 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 01:04:46.290312 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 01:04:46.290425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 01:04:46.380563 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (887) Jan 14 01:04:46.380614 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:04:46.380642 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:04:46.380667 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 01:04:46.380707 kernel: BTRFS info (device sda6): turning on async discard Jan 14 01:04:46.380732 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 01:04:46.290469 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:04:46.365834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:04:46.387869 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 01:04:46.405766 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 01:04:46.747133 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:04:46.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
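The fetch stage above retrieves the instance's user-data from the GCE metadata server (URL taken verbatim from the log) and records a SHA-512 of the payload. A minimal sketch of that request, assuming it runs on a GCE instance; the Metadata-Flavor header is required by the metadata service:

```python
import hashlib
import urllib.request

USER_DATA_URL = (
    "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"
)

def fetch_user_data() -> tuple[bytes, str]:
    """Fetch instance user-data and return it with its SHA-512 hex digest."""
    req = urllib.request.Request(
        USER_DATA_URL, headers={"Metadata-Flavor": "Google"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        payload = resp.read()
    return payload, hashlib.sha512(payload).hexdigest()

if __name__ == "__main__":
    data, digest = fetch_user_data()
    print(f"fetched {len(data)} bytes, sha512={digest}")
```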
res=success' Jan 14 01:04:46.766506 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 01:04:46.775744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:04:46.809229 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 01:04:46.826602 kernel: BTRFS info (device sda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:04:46.850994 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:04:46.859925 ignition[984]: INFO : Ignition 2.24.0 Jan 14 01:04:46.859925 ignition[984]: INFO : Stage: mount Jan 14 01:04:46.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:46.875904 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:04:46.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:46.899818 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:46.899818 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:46.899818 ignition[984]: INFO : mount: mount passed Jan 14 01:04:46.899818 ignition[984]: INFO : Ignition finished successfully Jan 14 01:04:46.893135 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:04:46.934605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:04:46.991379 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (995) Jan 14 01:04:47.009470 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:04:47.009605 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:04:47.026643 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 01:04:47.026816 kernel: BTRFS info (device sda6): turning on async discard Jan 14 01:04:47.026844 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 01:04:47.035208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 01:04:47.087096 ignition[1012]: INFO : Ignition 2.24.0 Jan 14 01:04:47.087096 ignition[1012]: INFO : Stage: files Jan 14 01:04:47.101505 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:47.101505 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:47.101505 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:04:47.101505 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:04:47.101505 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:04:47.101505 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:04:47.101505 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:04:47.101505 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:04:47.099953 unknown[1012]: wrote ssh authorized keys file for user: core Jan 14 01:04:47.197508 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:04:47.197508 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 01:04:47.366433 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:04:47.602607 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:04:47.618535 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 01:04:48.076002 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:04:48.649865 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:04:48.649865 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:04:48.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.686626 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:04:48.686626 ignition[1012]: INFO : files: files passed Jan 14 01:04:48.686626 ignition[1012]: INFO : Ignition finished successfully Jan 14 01:04:48.660807 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:04:48.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.669371 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:04:48.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.710808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:04:48.726141 systemd[1]: ignition-quench.service: Deactivated successfully. 
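Among the files-stage operations above, op(9) writes a symlink so the downloaded Kubernetes sysext image is picked up from /etc/extensions. A rough sketch of that single operation, using the /sysroot prefix and paths exactly as logged; this is an illustration, not Ignition's actual implementation:

```python
import os

SYSROOT = "/sysroot"  # prefix used while the real root is still mounted there
LINK = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"

os.makedirs(os.path.dirname(LINK), exist_ok=True)
if not os.path.islink(LINK):
    # The link target is absolute; it resolves inside the sysroot once the
    # system has switched root, matching the paths recorded in the log.
    os.symlink(TARGET, LINK)
```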
Jan 14 01:04:48.885701 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:04:48.885701 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:04:48.726273 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 01:04:48.920591 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:04:48.819025 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:04:48.837941 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:04:48.856741 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:04:48.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:48.952302 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:04:48.952474 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:04:48.964537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:04:48.983798 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:04:48.993430 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:04:48.994910 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:04:49.075510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:04:49.119655 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 14 01:04:49.119709 kernel: audit: type=1130 audit(1768352689.074:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.078130 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:04:49.157155 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:04:49.157765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:04:49.167947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:04:49.186980 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:04:49.206016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:04:49.256558 kernel: audit: type=1131 audit(1768352689.218:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:04:49.206232 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:04:49.257007 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:04:49.276938 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:04:49.294917 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:04:49.302948 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:04:49.339686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:04:49.357695 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:04:49.375761 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:04:49.376185 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:04:49.392031 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:04:49.427742 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:04:49.428169 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:04:49.456702 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:04:49.457159 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:04:49.502555 kernel: audit: type=1131 audit(1768352689.472:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.502869 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:04:49.503272 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:04:49.520890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:04:49.521069 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:04:49.597592 kernel: audit: type=1131 audit(1768352689.564:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.539962 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:04:49.540171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:04:49.645629 kernel: audit: type=1131 audit(1768352689.604:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.597966 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 14 01:04:49.683627 kernel: audit: type=1131 audit(1768352689.653:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.598224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:04:49.606066 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:04:49.606411 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:04:49.721681 ignition[1067]: INFO : Ignition 2.24.0 Jan 14 01:04:49.721681 ignition[1067]: INFO : Stage: umount Jan 14 01:04:49.721681 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:04:49.721681 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 14 01:04:49.721681 ignition[1067]: INFO : umount: umount passed Jan 14 01:04:49.721681 ignition[1067]: INFO : Ignition finished successfully Jan 14 01:04:49.849714 kernel: audit: type=1131 audit(1768352689.737:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.849780 kernel: audit: type=1131 audit(1768352689.780:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.849800 kernel: audit: type=1131 audit(1768352689.809:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.657679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:04:49.694158 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:04:49.907589 kernel: audit: type=1131 audit(1768352689.879:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.712979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:04:49.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:49.713276 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:04:49.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.738914 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:04:49.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.739207 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:04:49.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.781959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:04:49.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.782261 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:04:50.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.866589 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:04:49.868133 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:04:49.868285 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:04:49.881543 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:04:49.881717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:04:49.920958 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:04:49.921110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:04:49.938272 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:04:50.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.938366 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:04:50.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.954714 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:04:50.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.954817 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:04:49.972615 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 14 01:04:49.972764 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:04:50.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:49.990627 systemd[1]: Stopped target network.target - Network. Jan 14 01:04:50.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.257000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:04:50.257000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:04:49.990749 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:04:49.990830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:04:50.015676 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:04:50.031544 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:04:50.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.035502 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:04:50.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.049547 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:04:50.063553 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:04:50.078612 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:04:50.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.078742 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:04:50.095619 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:04:50.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.095709 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:04:50.112615 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:04:50.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.112684 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:04:50.130611 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 14 01:04:50.130742 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:04:50.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.149639 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:04:50.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.149763 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:04:50.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.167686 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:04:50.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.167806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:04:50.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.184807 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:04:50.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.203648 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:04:50.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:50.220209 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:04:50.220375 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:04:50.239263 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:04:50.239426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:04:50.259099 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:04:50.271633 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:04:50.271723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:04:50.282257 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:04:50.307520 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:04:50.743546 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Jan 14 01:04:50.307707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:04:50.325721 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 14 01:04:50.325840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:04:50.326003 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:04:50.326072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:04:50.326212 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:04:50.343629 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:04:50.343809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:04:50.373778 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:04:50.373931 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:04:50.392715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:04:50.392800 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:04:50.416577 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:04:50.416702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:04:50.432998 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:04:50.433114 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:04:50.456782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:04:50.456894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:04:50.485112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:04:50.508489 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:04:50.508614 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:04:50.518781 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:04:50.518869 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:04:50.536769 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 01:04:50.536860 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:04:50.555747 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:04:50.555834 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:04:50.574735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:04:50.574830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:04:50.594808 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:04:50.594955 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:04:50.612100 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:04:50.612229 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:04:50.632473 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:04:50.650959 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:04:50.694435 systemd[1]: Switching root. 
Jan 14 01:04:51.075544 systemd-journald[291]: Journal stopped Jan 14 01:04:53.738124 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:04:53.738182 kernel: SELinux: policy capability open_perms=1 Jan 14 01:04:53.738223 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:04:53.738244 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:04:53.738264 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:04:53.738283 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:04:53.738306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:04:53.738331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:04:53.738383 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:04:53.738406 systemd[1]: Successfully loaded SELinux policy in 115.416ms. Jan 14 01:04:53.738432 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.943ms. Jan 14 01:04:53.738455 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:04:53.738478 systemd[1]: Detected virtualization google. Jan 14 01:04:53.738506 systemd[1]: Detected architecture x86-64. Jan 14 01:04:53.738530 systemd[1]: Detected first boot. Jan 14 01:04:53.738553 systemd[1]: Initializing machine ID from random generator. Jan 14 01:04:53.738577 zram_generator::config[1109]: No configuration found. Jan 14 01:04:53.738608 kernel: Guest personality initialized and is inactive Jan 14 01:04:53.738628 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:04:53.738650 kernel: Initialized host personality Jan 14 01:04:53.738675 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:04:53.738697 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:04:53.738722 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:04:53.738744 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:04:53.738772 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:04:53.738803 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:04:53.738827 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:04:53.738850 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:04:53.738874 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:04:53.738902 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:04:53.738926 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:04:53.738950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:04:53.738974 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:04:53.738997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:04:53.739022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:04:53.739044 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 14 01:04:53.739073 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:04:53.739097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:04:53.739121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:04:53.739146 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:04:53.739176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:04:53.739214 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:04:53.739243 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:04:53.739266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:04:53.739291 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:04:53.739315 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:04:53.739387 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:04:53.739411 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:04:53.739437 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:04:53.740878 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:04:53.740905 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:04:53.740930 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:04:53.740952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:04:53.740974 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:04:53.741003 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:04:53.741026 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:04:53.741050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:04:53.741073 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:04:53.741095 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:04:53.741147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:04:53.741170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:04:53.741195 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:04:53.741227 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:04:53.741250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:04:53.741274 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:04:53.741297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:04:53.741325 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:04:53.741929 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:04:53.741970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:04:53.741994 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 14 01:04:53.742018 systemd[1]: Reached target machines.target - Containers. Jan 14 01:04:53.742041 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:04:53.742073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:04:53.742097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:04:53.742122 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:04:53.742147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:04:53.742172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:04:53.742197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:04:53.742290 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:04:53.742321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:04:53.742380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:04:53.742406 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:04:53.742431 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:04:53.742456 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:04:53.742480 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:04:53.742505 kernel: fuse: init (API version 7.41) Jan 14 01:04:53.742537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:04:53.742562 kernel: ACPI: bus type drm_connector registered Jan 14 01:04:53.742586 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:04:53.742611 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:04:53.742636 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:04:53.742709 systemd-journald[1198]: Collecting audit messages is enabled. Jan 14 01:04:53.742765 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:04:53.742789 systemd-journald[1198]: Journal started Jan 14 01:04:53.742835 systemd-journald[1198]: Runtime Journal (/run/log/journal/434c21c256dd430392043e3d44fbfbcd) is 8M, max 148.4M, 140.4M free. Jan 14 01:04:53.032000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:04:53.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:53.647000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:04:53.647000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:04:53.650000 audit: BPF prog-id=15 op=LOAD Jan 14 01:04:53.655000 audit: BPF prog-id=16 op=LOAD Jan 14 01:04:53.655000 audit: BPF prog-id=17 op=LOAD Jan 14 01:04:53.731000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:04:53.731000 audit[1198]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff5bfda300 a2=4000 a3=0 items=0 ppid=1 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:53.731000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:04:52.428011 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:04:52.448541 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 01:04:52.449418 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:04:53.767378 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:04:53.790380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:04:53.820384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:04:53.836397 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:04:53.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.847229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:04:53.856785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:04:53.867797 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:04:53.876874 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:04:53.887785 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:04:53.896733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:04:53.906058 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:04:53.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.917100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:04:53.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.927957 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:04:53.928246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:04:53.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:53.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.939119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:04:53.939447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:04:53.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.951050 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:04:53.951384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:04:53.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.961057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:04:53.961395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:04:53.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.972983 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:04:53.973277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:04:53.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.982991 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:04:53.983256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:04:53.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:53.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:53.993312 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:04:54.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.003289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:04:54.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.014958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:04:54.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.026127 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:04:54.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.038009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:04:54.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.061807 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:04:54.071963 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:04:54.084105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:04:54.101538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:04:54.110611 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:04:54.110861 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:04:54.120813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:04:54.131759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:04:54.132017 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:04:54.134551 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:04:54.154602 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:04:54.165768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:04:54.172221 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:04:54.181666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
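An annotation on the modprobe@*.service runs above (configfs, dm_mod, drm, efi_pstore, fuse, loop): these are oneshot units that exit as soon as modprobe returns, which is why each is reported both finished and deactivated within the same instant. A minimal Python sketch, assuming a Linux host with /sys/module and /proc/modules, that checks whether those modules are actually resident afterwards:

    from pathlib import Path

    # Modules named by the modprobe@ unit instances in the log above.
    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def module_state(name: str) -> str:
        # /sys/module lists both built-in and loaded modules;
        # /proc/modules lists only loadable ones.
        if not Path(f"/sys/module/{name}").is_dir():
            return "absent"
        loaded = any(line.split()[0] == name
                     for line in Path("/proc/modules").read_text().splitlines())
        return "loaded module" if loaded else "built-in (or already resident)"

    for mod in MODULES:
        print(f"{mod}: {module_state(mod)}")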
Jan 14 01:04:54.183519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:04:54.193256 systemd-journald[1198]: Time spent on flushing to /var/log/journal/434c21c256dd430392043e3d44fbfbcd is 102.470ms for 1090 entries. Jan 14 01:04:54.193256 systemd-journald[1198]: System Journal (/var/log/journal/434c21c256dd430392043e3d44fbfbcd) is 8M, max 588.1M, 580.1M free. Jan 14 01:04:54.324702 systemd-journald[1198]: Received client request to flush runtime journal. Jan 14 01:04:54.324789 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:04:54.324823 kernel: kauditd_printk_skb: 76 callbacks suppressed Jan 14 01:04:54.324854 kernel: audit: type=1130 audit(1768352694.279:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.209554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:04:54.223669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:04:54.236987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:04:54.250296 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:04:54.264446 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:04:54.281902 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:04:54.328759 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:04:54.339513 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 14 01:04:54.339549 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 14 01:04:54.342719 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:04:54.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.376405 kernel: audit: type=1130 audit(1768352694.352:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.376394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:04:54.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.401420 kernel: audit: type=1130 audit(1768352694.375:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.401761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
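An annotation on the journald flush above: the "System Journal ... is 8M, max 588.1M" figures describe persistent storage under /var/log/journal after the runtime journal is flushed to disk. A small Python sketch (directory taken from the log line) that totals the same files; journalctl --disk-usage reports the equivalent number:

    from pathlib import Path

    # Persistent journal location reported by systemd-journald above.
    root = Path("/var/log/journal")
    total = sum(f.stat().st_size for f in root.rglob("*.journal")) if root.exists() else 0
    print(f"persistent journal usage: {total / (1024 * 1024):.1f} MiB")
    # Equivalent figure from the CLI: journalctl --disk-usage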
Jan 14 01:04:54.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.455264 kernel: audit: type=1130 audit(1768352694.428:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.456653 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:04:54.473481 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:04:54.514569 kernel: audit: type=1130 audit(1768352694.483:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.514730 kernel: loop2: detected capacity change from 0 to 229808 Jan 14 01:04:54.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.559589 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:04:54.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.574613 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:04:54.595464 kernel: audit: type=1130 audit(1768352694.567:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.567000 audit: BPF prog-id=18 op=LOAD Jan 14 01:04:54.613858 kernel: audit: type=1334 audit(1768352694.567:134): prog-id=18 op=LOAD Jan 14 01:04:54.614003 kernel: audit: type=1334 audit(1768352694.567:135): prog-id=19 op=LOAD Jan 14 01:04:54.614046 kernel: audit: type=1334 audit(1768352694.567:136): prog-id=20 op=LOAD Jan 14 01:04:54.567000 audit: BPF prog-id=19 op=LOAD Jan 14 01:04:54.567000 audit: BPF prog-id=20 op=LOAD Jan 14 01:04:54.624000 audit: BPF prog-id=21 op=LOAD Jan 14 01:04:54.630650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:04:54.633388 kernel: audit: type=1334 audit(1768352694.624:137): prog-id=21 op=LOAD Jan 14 01:04:54.646603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:04:54.650369 kernel: loop3: detected capacity change from 0 to 99320 Jan 14 01:04:54.666000 audit: BPF prog-id=22 op=LOAD Jan 14 01:04:54.672000 audit: BPF prog-id=23 op=LOAD Jan 14 01:04:54.672000 audit: BPF prog-id=24 op=LOAD Jan 14 01:04:54.677384 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:04:54.686000 audit: BPF prog-id=25 op=LOAD Jan 14 01:04:54.686000 audit: BPF prog-id=26 op=LOAD Jan 14 01:04:54.686000 audit: BPF prog-id=27 op=LOAD Jan 14 01:04:54.690617 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:04:54.716641 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. 
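An annotation on the audit type=1334 records above: each BPF program systemd attaches for a unit (device filters, IP address filtering and similar cgroup attachments) is logged as a prog-id with op=LOAD, and replaced programs later appear with op=UNLOAD. A hedged Python sketch that tallies such records from a captured log excerpt piped on stdin; note the "kauditd_printk_skb: ... callbacks suppressed" line means the count is only approximate:

    import re
    import sys

    # Matches both "audit: BPF prog-id=18 op=LOAD" and
    # "audit(...): prog-id=18 op=LOAD" variants seen in the log.
    pattern = re.compile(r"prog-id=(\d+) op=(LOAD|UNLOAD)")

    live = set()
    for line in sys.stdin:
        for prog_id, op in pattern.findall(line):
            (live.add if op == "LOAD" else live.discard)(prog_id)
    print(f"programs loaded but not unloaded in this excerpt: {len(live)}")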
Jan 14 01:04:54.717183 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 14 01:04:54.730522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:04:54.746363 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 01:04:54.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.826162 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:04:54.832065 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 01:04:54.848089 systemd-nsresourced[1258]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:04:54.853914 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:04:54.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.869004 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:04:54.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:54.886490 kernel: loop6: detected capacity change from 0 to 229808 Jan 14 01:04:54.928382 kernel: loop7: detected capacity change from 0 to 99320 Jan 14 01:04:54.978940 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 01:04:55.020999 (sd-merge)[1264]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-gce.raw'. Jan 14 01:04:55.041779 (sd-merge)[1264]: Merged extensions into '/usr'. Jan 14 01:04:55.058564 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:04:55.058589 systemd[1]: Reloading... Jan 14 01:04:55.188667 systemd-oomd[1253]: No swap; memory pressure usage will be degraded Jan 14 01:04:55.229330 systemd-resolved[1255]: Positive Trust Anchors: Jan 14 01:04:55.229374 systemd-resolved[1255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:04:55.229382 systemd-resolved[1255]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:04:55.229444 systemd-resolved[1255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:04:55.245367 zram_generator::config[1308]: No configuration found. Jan 14 01:04:55.260358 systemd-resolved[1255]: Defaulting to hostname 'linux'. Jan 14 01:04:55.619991 systemd[1]: Reloading finished in 559 ms. Jan 14 01:04:55.654472 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
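An annotation on the (sd-merge) lines above: systemd-sysext is overlaying the listed *.raw system extension images onto /usr, which is why a daemon reload follows. A rough Python sketch, assuming the documented default search directories (they may differ on a given image), that surveys available images and then asks systemd-sysext for its own view:

    import subprocess
    from pathlib import Path

    # Common sysext search directories; assumption, not taken from this log.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        images = sorted(p.name for p in Path(d).glob("*.raw")) if Path(d).is_dir() else []
        print(f"{d}: {images or 'none'}")

    # systemd-sysext's status output names each merged extension hierarchy.
    subprocess.run(["systemd-sysext", "status"], check=False)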
Jan 14 01:04:55.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:55.664858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:04:55.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:55.674086 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:04:55.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:55.685117 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:04:55.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:55.700462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:04:55.731269 systemd[1]: Starting ensure-sysext.service... Jan 14 01:04:55.745555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:04:55.754000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:04:55.754000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:04:55.755000 audit: BPF prog-id=28 op=LOAD Jan 14 01:04:55.755000 audit: BPF prog-id=29 op=LOAD Jan 14 01:04:55.758069 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:04:55.768000 audit: BPF prog-id=30 op=LOAD Jan 14 01:04:55.770000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:04:55.770000 audit: BPF prog-id=31 op=LOAD Jan 14 01:04:55.770000 audit: BPF prog-id=32 op=LOAD Jan 14 01:04:55.770000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:04:55.770000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:04:55.771000 audit: BPF prog-id=33 op=LOAD Jan 14 01:04:55.771000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:04:55.771000 audit: BPF prog-id=34 op=LOAD Jan 14 01:04:55.771000 audit: BPF prog-id=35 op=LOAD Jan 14 01:04:55.772000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:04:55.772000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:04:55.773000 audit: BPF prog-id=36 op=LOAD Jan 14 01:04:55.774000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:04:55.779000 audit: BPF prog-id=37 op=LOAD Jan 14 01:04:55.779000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:04:55.779000 audit: BPF prog-id=38 op=LOAD Jan 14 01:04:55.780000 audit: BPF prog-id=39 op=LOAD Jan 14 01:04:55.780000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:04:55.780000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:04:55.781000 audit: BPF prog-id=40 op=LOAD Jan 14 01:04:55.782000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:04:55.782000 audit: BPF prog-id=41 op=LOAD Jan 14 01:04:55.782000 audit: BPF prog-id=42 op=LOAD Jan 14 01:04:55.782000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:04:55.782000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:04:55.786267 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 14 01:04:55.786689 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:04:55.787074 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:04:55.788902 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Jan 14 01:04:55.789054 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Jan 14 01:04:55.800349 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:04:55.801093 systemd[1]: Reloading... Jan 14 01:04:55.812914 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:04:55.812952 systemd-tmpfiles[1349]: Skipping /boot Jan 14 01:04:55.835964 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:04:55.835988 systemd-tmpfiles[1349]: Skipping /boot Jan 14 01:04:55.867436 systemd-udevd[1350]: Using default interface naming scheme 'v257'. Jan 14 01:04:55.953378 zram_generator::config[1382]: No configuration found. Jan 14 01:04:56.211432 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:04:56.274374 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:04:56.302363 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:04:56.316369 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 14 01:04:56.325370 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 14 01:04:56.337375 kernel: ACPI: button: Sleep Button [SLPF] Jan 14 01:04:56.413482 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:04:56.682421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 14 01:04:56.693547 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:04:56.693699 systemd[1]: Reloading finished in 891 ms. Jan 14 01:04:56.710926 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:04:56.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:04:56.725000 audit: BPF prog-id=43 op=LOAD Jan 14 01:04:56.725000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:04:56.725000 audit: BPF prog-id=44 op=LOAD Jan 14 01:04:56.725000 audit: BPF prog-id=45 op=LOAD Jan 14 01:04:56.725000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:04:56.725000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:04:56.726000 audit: BPF prog-id=46 op=LOAD Jan 14 01:04:56.726000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:04:56.726000 audit: BPF prog-id=47 op=LOAD Jan 14 01:04:56.726000 audit: BPF prog-id=48 op=LOAD Jan 14 01:04:56.726000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:04:56.726000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:04:56.729000 audit: BPF prog-id=49 op=LOAD Jan 14 01:04:56.729000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:04:56.729000 audit: BPF prog-id=50 op=LOAD Jan 14 01:04:56.729000 audit: BPF prog-id=51 op=LOAD Jan 14 01:04:56.729000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:04:56.729000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:04:56.730000 audit: BPF prog-id=52 op=LOAD Jan 14 01:04:56.730000 audit: BPF prog-id=53 op=LOAD Jan 14 01:04:56.730000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:04:56.730000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:04:56.733000 audit: BPF prog-id=54 op=LOAD Jan 14 01:04:56.733000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:04:56.735000 audit: BPF prog-id=55 op=LOAD Jan 14 01:04:56.735000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:04:56.738000 audit: BPF prog-id=56 op=LOAD Jan 14 01:04:56.738000 audit: BPF prog-id=57 op=LOAD Jan 14 01:04:56.738000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:04:56.738000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:04:56.749187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:04:56.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:56.796447 systemd[1]: Finished ensure-sysext.service. Jan 14 01:04:56.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:04:56.829838 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jan 14 01:04:56.838715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:04:56.840575 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:04:56.860261 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:04:56.871949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:04:56.875715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:04:56.887843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:04:56.899110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:04:56.912398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:04:56.926068 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 14 01:04:56.933804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 14 01:04:56.934060 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:04:56.942146 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:04:56.953500 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:04:56.956261 augenrules[1496]: No rules Jan 14 01:04:56.954000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:04:56.954000 audit[1496]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfb1d8a70 a2=420 a3=0 items=0 ppid=1469 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:56.954000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:04:56.963535 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:04:56.966515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:04:56.989558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:04:56.998541 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:04:57.012709 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:04:57.016633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:04:57.017039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:04:57.022204 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:04:57.022686 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:04:57.023258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:04:57.024758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:04:57.025291 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:04:57.026489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:04:57.027013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:04:57.028382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:04:57.028963 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:04:57.029289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:04:57.051273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:04:57.051506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:04:57.083378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:04:57.115960 systemd[1]: Finished setup-oem.service - Setup OEM. 
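An annotation on the audit PROCTITLE record above: the value is the process's argv, hex-encoded with NUL separators. Decoding it (Python sketch below, using the exact string from the log) recovers the auditctl invocation behind the "No rules" message from augenrules:

    # PROCTITLE value copied verbatim from the audit record above.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules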
Jan 14 01:04:57.120186 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:04:57.127950 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 14 01:04:57.162630 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:04:57.183898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:04:57.184948 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:04:57.248205 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 14 01:04:57.254885 systemd-networkd[1503]: lo: Link UP Jan 14 01:04:57.255422 systemd-networkd[1503]: lo: Gained carrier Jan 14 01:04:57.259610 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:04:57.260150 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:04:57.260163 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:04:57.260251 systemd[1]: Reached target network.target - Network. Jan 14 01:04:57.263251 systemd-networkd[1503]: eth0: Link UP Jan 14 01:04:57.263611 systemd-networkd[1503]: eth0: Gained carrier Jan 14 01:04:57.263642 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:04:57.265590 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:04:57.269428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:04:57.275431 systemd-networkd[1503]: eth0: Overlong DHCP hostname received, shortened from 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3.c.flatcar-212911.internal' to 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:04:57.275464 systemd-networkd[1503]: eth0: DHCPv4 address 10.128.0.42/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 14 01:04:57.298845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:04:57.422878 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:04:57.925307 ldconfig[1493]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:04:57.932206 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:04:57.943678 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:04:57.969352 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:04:57.978856 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:04:57.987691 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:04:57.998603 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:04:58.008535 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:04:58.018754 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
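An annotation on the "Overlong DHCP hostname" message above: Linux limits hostnames to HOST_NAME_MAX (64 bytes), and the FQDN offered by the GCE DHCP server exceeds that, so systemd-networkd keeps only the leading label as the transient hostname. A short Python check using the values from the log:

    HOST_NAME_MAX = 64  # Linux limit, see sethostname(2)

    # FQDN and shortened name as reported by systemd-networkd above.
    fqdn = "ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3.c.flatcar-212911.internal"
    label = fqdn.split(".", 1)[0]
    print(f"full FQDN: {len(fqdn)} bytes (> {HOST_NAME_MAX})")
    print(f"kept label: {label!r} ({len(label)} bytes)")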
Jan 14 01:04:58.028688 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:04:58.038666 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:04:58.049789 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:04:58.058585 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:04:58.069547 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:04:58.069622 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:04:58.077552 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:04:58.086626 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:04:58.097365 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:04:58.106878 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:04:58.117784 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:04:58.128538 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:04:58.148452 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:04:58.158156 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:04:58.169499 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:04:58.179729 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:04:58.188522 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:04:58.196632 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:04:58.196682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:04:58.198451 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:04:58.219997 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:04:58.235875 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:04:58.251120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:04:58.277650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:04:58.289646 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:04:58.299819 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:04:58.308628 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:04:58.319719 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:04:58.331428 jq[1550]: false Jan 14 01:04:58.332297 systemd[1]: Started ntpd.service - Network Time Service. Jan 14 01:04:58.345023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:04:58.351995 extend-filesystems[1553]: Found /dev/sda6 Jan 14 01:04:58.359136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 14 01:04:58.371367 extend-filesystems[1553]: Found /dev/sda9 Jan 14 01:04:58.380070 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jan 14 01:04:58.378918 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jan 14 01:04:58.380887 coreos-metadata[1547]: Jan 14 01:04:58.380 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 14 01:04:58.381630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:04:58.384000 coreos-metadata[1547]: Jan 14 01:04:58.382 INFO Fetch successful Jan 14 01:04:58.384000 coreos-metadata[1547]: Jan 14 01:04:58.382 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 14 01:04:58.384000 coreos-metadata[1547]: Jan 14 01:04:58.383 INFO Fetch successful Jan 14 01:04:58.384000 coreos-metadata[1547]: Jan 14 01:04:58.383 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 14 01:04:58.389970 extend-filesystems[1553]: Checking size of /dev/sda9 Jan 14 01:04:58.397614 coreos-metadata[1547]: Jan 14 01:04:58.384 INFO Fetch successful Jan 14 01:04:58.397614 coreos-metadata[1547]: Jan 14 01:04:58.384 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 14 01:04:58.397614 coreos-metadata[1547]: Jan 14 01:04:58.387 INFO Fetch successful Jan 14 01:04:58.400728 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:04:58.401818 oslogin_cache_refresh[1554]: Failure getting users, quitting Jan 14 01:04:58.403808 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Jan 14 01:04:58.403808 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:04:58.403808 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Jan 14 01:04:58.401880 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:04:58.401964 oslogin_cache_refresh[1554]: Refreshing group entry cache Jan 14 01:04:58.408768 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Jan 14 01:04:58.408768 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:04:58.408539 oslogin_cache_refresh[1554]: Failure getting groups, quitting Jan 14 01:04:58.408560 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:04:58.409541 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 14 01:04:58.410604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:04:58.412944 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:04:58.427557 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:04:58.447437 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:04:58.458308 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
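An annotation on the coreos-metadata fetches above: 169.254.169.254 is the GCE metadata service, which only answers requests carrying the Metadata-Flavor: Google header. A minimal Python sketch reproducing the hostname lookup (only meaningful from inside an instance):

    import urllib.request

    # Same endpoint coreos-metadata queries in the log above.
    URL = "http://169.254.169.254/computeMetadata/v1/instance/hostname"
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())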
Jan 14 01:04:58.463560 extend-filesystems[1553]: Resized partition /dev/sda9 Jan 14 01:04:58.479217 jq[1575]: true Jan 14 01:04:58.458792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:47:10 UTC 2026 (1): Starting Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: ---------------------------------------------------- Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: ntp-4 is maintained by Network Time Foundation, Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: corporation. Support and training for ntp-4 are Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: available at https://www.nwtime.org/support Jan 14 01:04:58.481129 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: ---------------------------------------------------- Jan 14 01:04:58.478517 ntpd[1559]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:47:10 UTC 2026 (1): Starting Jan 14 01:04:58.459295 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:04:58.478584 ntpd[1559]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 01:04:58.461225 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:04:58.478595 ntpd[1559]: ---------------------------------------------------- Jan 14 01:04:58.492107 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: proto: precision = 0.069 usec (-24) Jan 14 01:04:58.492107 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: basedate set to 2026-01-01 Jan 14 01:04:58.492107 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: gps base set to 2026-01-04 (week 2400) Jan 14 01:04:58.471235 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:04:58.478605 ntpd[1559]: ntp-4 is maintained by Network Time Foundation, Jan 14 01:04:58.472784 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:04:58.478614 ntpd[1559]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 01:04:58.492481 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:04:58.478623 ntpd[1559]: corporation. 
Support and training for ntp-4 are Jan 14 01:04:58.478633 ntpd[1559]: available at https://www.nwtime.org/support Jan 14 01:04:58.478642 ntpd[1559]: ---------------------------------------------------- Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listen normally on 3 eth0 10.128.0.42:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listen normally on 4 lo [::1]:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: bind(21) AF_INET6 [fe80::4001:aff:fe80:2a%2]:123 flags 0x811 failed: Cannot assign requested address Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:2a%2]:123 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: cannot bind address fe80::4001:aff:fe80:2a%2 Jan 14 01:04:58.497429 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: Listening on routing socket on fd #21 for interface updates Jan 14 01:04:58.492881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:04:58.487811 ntpd[1559]: proto: precision = 0.069 usec (-24) Jan 14 01:04:58.505528 extend-filesystems[1588]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:04:58.488221 ntpd[1559]: basedate set to 2026-01-01 Jan 14 01:04:58.488242 ntpd[1559]: gps base set to 2026-01-04 (week 2400) Jan 14 01:04:58.493456 ntpd[1559]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 01:04:58.493512 ntpd[1559]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 01:04:58.495500 ntpd[1559]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 01:04:58.495546 ntpd[1559]: Listen normally on 3 eth0 10.128.0.42:123 Jan 14 01:04:58.495592 ntpd[1559]: Listen normally on 4 lo [::1]:123 Jan 14 01:04:58.495633 ntpd[1559]: bind(21) AF_INET6 [fe80::4001:aff:fe80:2a%2]:123 flags 0x811 failed: Cannot assign requested address Jan 14 01:04:58.495663 ntpd[1559]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:2a%2]:123 Jan 14 01:04:58.495685 ntpd[1559]: cannot bind address fe80::4001:aff:fe80:2a%2 Jan 14 01:04:58.495725 ntpd[1559]: Listening on routing socket on fd #21 for interface updates Jan 14 01:04:58.523689 ntpd[1559]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:04:58.530572 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:04:58.530572 ntpd[1559]: 14 Jan 01:04:58 ntpd[1559]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:04:58.523752 ntpd[1559]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:04:58.550226 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2604027 blocks Jan 14 01:04:58.547155 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:04:58.563063 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 14 01:04:58.593002 update_engine[1573]: I20260114 01:04:58.592492 1573 main.cc:92] Flatcar Update Engine starting Jan 14 01:04:58.615967 jq[1593]: true Jan 14 01:04:58.660371 kernel: EXT4-fs (sda9): resized filesystem to 2604027 Jan 14 01:04:58.703522 tar[1591]: linux-amd64/LICENSE Jan 14 01:04:58.704826 tar[1591]: linux-amd64/helm Jan 14 01:04:58.708554 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 01:04:58.708554 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 14 01:04:58.708554 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 2604027 (4k) blocks long. Jan 14 01:04:58.746983 extend-filesystems[1553]: Resized filesystem in /dev/sda9 Jan 14 01:04:58.711300 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:04:58.712168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:04:58.811872 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:04:58.816387 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:04:58.835734 systemd[1]: Starting sshkeys.service... Jan 14 01:04:58.888370 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 01:04:58.902464 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 01:04:58.963318 dbus-daemon[1548]: [system] SELinux support is enabled Jan 14 01:04:58.965375 systemd-logind[1572]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:04:58.965413 systemd-logind[1572]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 14 01:04:58.965446 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:04:58.969523 systemd-logind[1572]: New seat seat0. Jan 14 01:04:58.971577 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:04:58.986004 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:04:58.989726 dbus-daemon[1548]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1503 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 01:04:59.018446 update_engine[1573]: I20260114 01:04:59.016763 1573 update_check_scheduler.cc:74] Next update check in 11m53s Jan 14 01:04:59.048705 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 01:04:59.050195 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:04:59.064723 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:04:59.065226 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
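An annotation on the resize above: converting the 4 KiB block counts reported by EXT4/resize2fs into sizes shows how much the root partition /dev/sda9 grew during the online resize. A small Python calculation using the figures from the log:

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output above
    old_blocks, new_blocks = 1_617_920, 2_604_027

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB, "
          f"grown by {gib(new_blocks - old_blocks):.2f} GiB")
    # -> before: 6.17 GiB, after: 9.93 GiB, grown by 3.76 GiB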
Jan 14 01:04:59.073856 coreos-metadata[1631]: Jan 14 01:04:59.073 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetch failed with 404: resource not found Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetch successful Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetch failed with 404: resource not found Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetch failed with 404: resource not found Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 14 01:04:59.080262 coreos-metadata[1631]: Jan 14 01:04:59.080 INFO Fetch successful Jan 14 01:04:59.081495 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 01:04:59.090551 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:04:59.091013 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:04:59.092663 unknown[1631]: wrote ssh authorized keys file for user: core Jan 14 01:04:59.108670 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:04:59.176836 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:04:59.184248 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 01:04:59.216467 systemd[1]: Finished sshkeys.service. Jan 14 01:04:59.275532 systemd-networkd[1503]: eth0: Gained IPv6LL Jan 14 01:04:59.287995 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:04:59.299171 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:04:59.314414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:04:59.332013 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:04:59.348518 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 14 01:04:59.451628 init.sh[1650]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 14 01:04:59.452979 init.sh[1650]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 14 01:04:59.461072 init.sh[1650]: + /usr/bin/google_instance_setup Jan 14 01:04:59.578123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:04:59.619705 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
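An annotation on the 404/success sequence above: coreos-metadata probes its SSH-key sources in a fixed order and keeps whichever attributes exist; on this instance only the ssh-keys attribute is set at the instance and project levels. A hedged Python sketch mirroring that probe order (again only meaningful from inside a GCE instance):

    import urllib.error
    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1"
    # Probe order taken from the coreos-metadata log lines above.
    PATHS = [
        "instance/attributes/sshKeys",
        "instance/attributes/ssh-keys",
        "instance/attributes/block-project-ssh-keys",
        "project/attributes/sshKeys",
        "project/attributes/ssh-keys",
    ]

    for path in PATHS:
        req = urllib.request.Request(f"{BASE}/{path}",
                                     headers={"Metadata-Flavor": "Google"})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                print(f"{path}: {len(resp.read())} bytes")
        except urllib.error.HTTPError as err:
            print(f"{path}: HTTP {err.code}")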
Jan 14 01:04:59.623139 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 01:04:59.626783 dbus-daemon[1548]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1640 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 01:04:59.641875 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 01:04:59.706817 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:04:59.787828 containerd[1606]: time="2026-01-14T01:04:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:04:59.788236 containerd[1606]: time="2026-01-14T01:04:59.787907041Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:04:59.862327 containerd[1606]: time="2026-01-14T01:04:59.862201323Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.77µs" Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862503463Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862581062Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862604305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862826340Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862877215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862973423Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.862994402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.863266236Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.863292410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:04:59.863364 containerd[1606]: time="2026-01-14T01:04:59.863312806Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.863328198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: 
time="2026-01-14T01:04:59.865656677Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.865682666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.865817277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.866237619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.866295422Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:04:59.866453 containerd[1606]: time="2026-01-14T01:04:59.866314463Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:04:59.869369 containerd[1606]: time="2026-01-14T01:04:59.869084704Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:04:59.871857 containerd[1606]: time="2026-01-14T01:04:59.871388621Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:04:59.871857 containerd[1606]: time="2026-01-14T01:04:59.871560748Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.882572151Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.882768661Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883061830Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883105667Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883128631Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883148633Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883185803Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883203143Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883221899Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883240849Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883277924Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883295330Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883311891Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:04:59.883441 containerd[1606]: time="2026-01-14T01:04:59.883355890Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:04:59.884599 containerd[1606]: time="2026-01-14T01:04:59.884562653Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:04:59.885655 containerd[1606]: time="2026-01-14T01:04:59.885413852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:04:59.885655 containerd[1606]: time="2026-01-14T01:04:59.885514683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:04:59.885655 containerd[1606]: time="2026-01-14T01:04:59.885541460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:04:59.885655 containerd[1606]: time="2026-01-14T01:04:59.885585953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:04:59.885655 containerd[1606]: time="2026-01-14T01:04:59.885605829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:04:59.886578 containerd[1606]: time="2026-01-14T01:04:59.886538843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.887884945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.887938657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.887970792Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.887998487Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.888058363Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.888139432Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.888168803Z" level=info msg="Start snapshots syncer" Jan 14 01:04:59.890370 containerd[1606]: time="2026-01-14T01:04:59.888212060Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:04:59.904081 containerd[1606]: time="2026-01-14T01:04:59.903553751Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.904437483Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.905833018Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906024969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906068221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906086443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906103504Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906132286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:04:59.906662 containerd[1606]: time="2026-01-14T01:04:59.906153852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908400087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908449686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908473148Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908530926Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908554223Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908570165Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908587854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908601747Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908618217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908635358Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908693500Z" level=info msg="runtime interface created" Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908703550Z" level=info msg="created NRI interface" Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908717578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908743732Z" level=info msg="Connect containerd service" Jan 14 01:04:59.909372 containerd[1606]: time="2026-01-14T01:04:59.908777243Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:04:59.916748 containerd[1606]: time="2026-01-14T01:04:59.915717917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:04:59.996156 polkitd[1666]: Started polkitd version 126 Jan 14 01:05:00.025244 polkitd[1666]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 01:05:00.033738 polkitd[1666]: Loading rules from directory /run/polkit-1/rules.d Jan 14 01:05:00.033826 polkitd[1666]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:05:00.035144 polkitd[1666]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 01:05:00.038778 polkitd[1666]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:05:00.038856 polkitd[1666]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 01:05:00.046187 polkitd[1666]: Finished loading, compiling and executing 2 rules Jan 14 01:05:00.047800 systemd[1]: Started polkit.service - Authorization Manager. 
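
The cri plugin's config dump above points at confDir /etc/cni/net.d and binDirs [/opt/cni/bin], and the "no network config found in /etc/cni/net.d" error is the expected state for a node that has not yet joined a cluster: the network add-on normally installs its own conflist later. Purely as a sketch of the kind of file the conf syncer is looking for (file name, network name and subnet below are assumptions, not taken from this machine):

# Illustrative only; a real cluster's network add-on writes its own conflist.
cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
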
Jan 14 01:05:00.054292 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 01:05:00.057191 polkitd[1666]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 01:05:00.131081 systemd-hostnamed[1640]: Hostname set to (transient) Jan 14 01:05:00.134905 systemd-resolved[1255]: System hostname changed to 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3'. Jan 14 01:05:00.202936 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:05:00.253317 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:05:00.271883 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:05:00.332766 containerd[1606]: time="2026-01-14T01:05:00.332702266Z" level=info msg="Start subscribing containerd event" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334384007Z" level=info msg="Start recovering state" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334544919Z" level=info msg="Start event monitor" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334567579Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334581210Z" level=info msg="Start streaming server" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334593857Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334606366Z" level=info msg="runtime interface starting up..." Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334616796Z" level=info msg="starting plugins..." Jan 14 01:05:00.335151 containerd[1606]: time="2026-01-14T01:05:00.334637055Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:05:00.335710 containerd[1606]: time="2026-01-14T01:05:00.335682629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:05:00.338372 containerd[1606]: time="2026-01-14T01:05:00.337933476Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:05:00.338868 containerd[1606]: time="2026-01-14T01:05:00.338693263Z" level=info msg="containerd successfully booted in 0.562056s" Jan 14 01:05:00.339198 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:05:00.350852 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:05:00.351422 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:05:00.365746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:05:00.416726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:05:00.430951 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:05:00.445886 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:05:00.454806 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:05:00.514003 tar[1591]: linux-amd64/README.md Jan 14 01:05:00.546974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:05:00.740428 instance-setup[1654]: INFO Running google_set_multiqueue. Jan 14 01:05:00.763635 instance-setup[1654]: INFO Set channels for eth0 to 2. Jan 14 01:05:00.769216 instance-setup[1654]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Jan 14 01:05:00.771472 instance-setup[1654]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 14 01:05:00.771725 instance-setup[1654]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 14 01:05:00.773563 instance-setup[1654]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 14 01:05:00.773902 instance-setup[1654]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 14 01:05:00.776498 instance-setup[1654]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 14 01:05:00.776781 instance-setup[1654]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 14 01:05:00.779058 instance-setup[1654]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 14 01:05:00.791704 instance-setup[1654]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 14 01:05:00.796042 instance-setup[1654]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 14 01:05:00.798558 instance-setup[1654]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 14 01:05:00.798627 instance-setup[1654]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 14 01:05:00.822085 init.sh[1650]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 14 01:05:00.994962 startup-script[1738]: INFO Starting startup scripts. Jan 14 01:05:01.006236 startup-script[1738]: INFO No startup scripts found in metadata. Jan 14 01:05:01.006318 startup-script[1738]: INFO Finished running startup scripts. Jan 14 01:05:01.030480 init.sh[1650]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 14 01:05:01.030480 init.sh[1650]: + daemon_pids=() Jan 14 01:05:01.030664 init.sh[1650]: + for d in accounts clock_skew network Jan 14 01:05:01.030878 init.sh[1650]: + daemon_pids+=($!) Jan 14 01:05:01.030878 init.sh[1650]: + for d in accounts clock_skew network Jan 14 01:05:01.031447 init.sh[1650]: + daemon_pids+=($!) Jan 14 01:05:01.031447 init.sh[1650]: + for d in accounts clock_skew network Jan 14 01:05:01.031567 init.sh[1741]: + /usr/bin/google_accounts_daemon Jan 14 01:05:01.031919 init.sh[1650]: + daemon_pids+=($!) Jan 14 01:05:01.031919 init.sh[1650]: + NOTIFY_SOCKET=/run/systemd/notify Jan 14 01:05:01.031919 init.sh[1650]: + /usr/bin/systemd-notify --ready Jan 14 01:05:01.033686 init.sh[1742]: + /usr/bin/google_clock_skew_daemon Jan 14 01:05:01.034424 init.sh[1743]: + /usr/bin/google_network_daemon Jan 14 01:05:01.054262 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 14 01:05:01.067318 init.sh[1650]: + wait -n 1741 1742 1743 Jan 14 01:05:01.460166 google-clock-skew[1742]: INFO Starting Google Clock Skew daemon. Jan 14 01:05:01.474190 google-clock-skew[1742]: INFO Clock drift token has changed: 0. Jan 14 01:05:01.477919 google-networking[1743]: INFO Starting Google Networking daemon. Jan 14 01:05:01.479038 ntpd[1559]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:2a%2]:123 Jan 14 01:05:01.480357 ntpd[1559]: 14 Jan 01:05:01 ntpd[1559]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:2a%2]:123 Jan 14 01:05:01.524558 groupadd[1753]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 14 01:05:01.530878 groupadd[1753]: group added to /etc/gshadow: name=google-sudoers Jan 14 01:05:01.693477 groupadd[1753]: new group: name=google-sudoers, GID=1000 Jan 14 01:05:01.751073 google-accounts[1741]: INFO Starting Google Accounts daemon. 
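
The "+" lines above are bash xtrace output from the oem-gce init.sh as it forks the three guest-environment daemons and then signals readiness to systemd. Reassembled from that trace (the real script ships with the GCE guest environment; this is only a reconstruction of the traced fragment, and the backgrounding form is inferred):

trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
daemon_pids=()
for d in accounts clock_skew network; do
  /usr/bin/google_${d}_daemon &     # google_accounts_daemon, google_clock_skew_daemon, google_network_daemon
  daemon_pids+=($!)
done
NOTIFY_SOCKET=/run/systemd/notify /usr/bin/systemd-notify --ready
wait -n "${daemon_pids[@]}"         # return (and let systemd react) as soon as any daemon exits
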
Jan 14 01:05:01.771657 google-accounts[1741]: WARNING OS Login not installed. Jan 14 01:05:01.773651 google-accounts[1741]: INFO Creating a new user account for 0. Jan 14 01:05:01.779194 init.sh[1764]: useradd: invalid user name '0': use --badname to ignore Jan 14 01:05:01.780703 google-accounts[1741]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 14 01:05:01.800696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:01.812495 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:05:01.816056 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:05:01.821848 systemd[1]: Startup finished in 2.978s (kernel) + 9.759s (initrd) + 10.555s (userspace) = 23.293s. Jan 14 01:05:02.000300 systemd-resolved[1255]: Clock change detected. Flushing caches. Jan 14 01:05:02.001046 google-clock-skew[1742]: INFO Synced system time with hardware clock. Jan 14 01:05:02.541385 kubelet[1769]: E0114 01:05:02.541305 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:05:02.544407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:05:02.544669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:05:02.545483 systemd[1]: kubelet.service: Consumed 1.339s CPU time, 267.7M memory peak. Jan 14 01:05:07.672283 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:05:07.674351 systemd[1]: Started sshd@0-10.128.0.42:22-4.153.228.146:40908.service - OpenSSH per-connection server daemon (4.153.228.146:40908). Jan 14 01:05:08.043649 sshd[1781]: Accepted publickey for core from 4.153.228.146 port 40908 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:08.045549 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:08.058696 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:05:08.060570 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:05:08.064726 systemd-logind[1572]: New session 1 of user core. Jan 14 01:05:08.089738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:05:08.093721 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:05:08.134653 (systemd)[1787]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:08.139135 systemd-logind[1572]: New session 2 of user core. Jan 14 01:05:08.311820 systemd[1787]: Queued start job for default target default.target. Jan 14 01:05:08.321162 systemd[1787]: Created slice app.slice - User Application Slice. Jan 14 01:05:08.321217 systemd[1787]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:05:08.321242 systemd[1787]: Reached target paths.target - Paths. Jan 14 01:05:08.321514 systemd[1787]: Reached target timers.target - Timers. Jan 14 01:05:08.323274 systemd[1787]: Starting dbus.socket - D-Bus User Message Bus Socket... 
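
The kubelet exit above is the usual pre-join failure mode: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join, so until one of those runs the unit keeps failing. Only as an illustration of the file the error refers to, a minimal hand-written KubeletConfiguration might look like this (field values are assumptions, not this node's eventual config):

# Illustrative only; on a kubeadm-provisioned node this file is written by kubeadm init/join.
mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd config dump above
EOF
systemctl restart kubelet
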
Jan 14 01:05:08.326351 systemd[1787]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:05:08.351881 systemd[1787]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:05:08.352906 systemd[1787]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:05:08.353063 systemd[1787]: Reached target sockets.target - Sockets. Jan 14 01:05:08.353414 systemd[1787]: Reached target basic.target - Basic System. Jan 14 01:05:08.353652 systemd[1787]: Reached target default.target - Main User Target. Jan 14 01:05:08.353811 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:05:08.353939 systemd[1787]: Startup finished in 206ms. Jan 14 01:05:08.367526 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:05:08.549765 systemd[1]: Started sshd@1-10.128.0.42:22-4.153.228.146:40916.service - OpenSSH per-connection server daemon (4.153.228.146:40916). Jan 14 01:05:08.892459 sshd[1801]: Accepted publickey for core from 4.153.228.146 port 40916 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:08.894322 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:08.902489 systemd-logind[1572]: New session 3 of user core. Jan 14 01:05:08.913409 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:05:09.058531 sshd[1805]: Connection closed by 4.153.228.146 port 40916 Jan 14 01:05:09.059379 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:09.065057 systemd[1]: sshd@1-10.128.0.42:22-4.153.228.146:40916.service: Deactivated successfully. Jan 14 01:05:09.067661 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:05:09.070629 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:05:09.072401 systemd-logind[1572]: Removed session 3. Jan 14 01:05:09.122839 systemd[1]: Started sshd@2-10.128.0.42:22-4.153.228.146:40918.service - OpenSSH per-connection server daemon (4.153.228.146:40918). Jan 14 01:05:09.457582 sshd[1812]: Accepted publickey for core from 4.153.228.146 port 40918 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:09.459380 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:09.467139 systemd-logind[1572]: New session 4 of user core. Jan 14 01:05:09.476430 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:05:09.617367 sshd[1816]: Connection closed by 4.153.228.146 port 40918 Jan 14 01:05:09.619332 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:09.626488 systemd[1]: sshd@2-10.128.0.42:22-4.153.228.146:40918.service: Deactivated successfully. Jan 14 01:05:09.629190 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:05:09.631304 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. Jan 14 01:05:09.633483 systemd-logind[1572]: Removed session 4. Jan 14 01:05:09.684781 systemd[1]: Started sshd@3-10.128.0.42:22-4.153.228.146:40928.service - OpenSSH per-connection server daemon (4.153.228.146:40928). Jan 14 01:05:10.017900 sshd[1822]: Accepted publickey for core from 4.153.228.146 port 40928 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:10.019358 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:10.025569 systemd-logind[1572]: New session 5 of user core. 
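
Each sshd@N-<local>:22-<peer>:<port>.service entry above is a transient per-connection unit created by the socket-activated sshd; they can be enumerated and inspected like any other unit, for example:

systemctl list-units --all 'sshd@*.service'
journalctl -u 'sshd@1-10.128.0.42:22-4.153.228.146:40916.service' --no-pager
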
Jan 14 01:05:10.036379 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:05:10.184255 sshd[1826]: Connection closed by 4.153.228.146 port 40928 Jan 14 01:05:10.185391 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:10.191727 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:05:10.192688 systemd[1]: sshd@3-10.128.0.42:22-4.153.228.146:40928.service: Deactivated successfully. Jan 14 01:05:10.195410 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:05:10.198150 systemd-logind[1572]: Removed session 5. Jan 14 01:05:10.260676 systemd[1]: Started sshd@4-10.128.0.42:22-4.153.228.146:40938.service - OpenSSH per-connection server daemon (4.153.228.146:40938). Jan 14 01:05:10.602459 sshd[1832]: Accepted publickey for core from 4.153.228.146 port 40938 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:10.604259 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:10.612139 systemd-logind[1572]: New session 6 of user core. Jan 14 01:05:10.627433 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:05:10.743156 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:05:10.743738 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:05:10.758557 sudo[1837]: pam_unix(sudo:session): session closed for user root Jan 14 01:05:10.812094 sshd[1836]: Connection closed by 4.153.228.146 port 40938 Jan 14 01:05:10.813490 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:10.820734 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:05:10.821378 systemd[1]: sshd@4-10.128.0.42:22-4.153.228.146:40938.service: Deactivated successfully. Jan 14 01:05:10.824047 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:05:10.826683 systemd-logind[1572]: Removed session 6. Jan 14 01:05:10.890934 systemd[1]: Started sshd@5-10.128.0.42:22-4.153.228.146:40950.service - OpenSSH per-connection server daemon (4.153.228.146:40950). Jan 14 01:05:11.228636 sshd[1844]: Accepted publickey for core from 4.153.228.146 port 40950 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:11.230243 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:11.238839 systemd-logind[1572]: New session 7 of user core. Jan 14 01:05:11.245414 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:05:11.351989 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:05:11.352562 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:05:11.356832 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 14 01:05:11.372954 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:05:11.373521 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:05:11.384175 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
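
Sessions 6 and 7 above run three privileged commands via sudo; taken together they switch SELinux to enforcing and clear the shipped audit rules before reloading them (paths exactly as logged):

sudo setenforce 1
sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo systemctl restart audit-rules   # reload; augenrules then reports "No rules" below
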
Jan 14 01:05:11.441000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:05:11.443126 augenrules[1874]: No rules Jan 14 01:05:11.448090 kernel: kauditd_printk_skb: 79 callbacks suppressed Jan 14 01:05:11.448228 kernel: audit: type=1305 audit(1768352711.441:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:05:11.450162 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:05:11.450530 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:05:11.453340 sudo[1849]: pam_unix(sudo:session): session closed for user root Jan 14 01:05:11.441000 audit[1874]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe217f9830 a2=420 a3=0 items=0 ppid=1855 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:11.493438 kernel: audit: type=1300 audit(1768352711.441:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe217f9830 a2=420 a3=0 items=0 ppid=1855 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:11.493628 kernel: audit: type=1327 audit(1768352711.441:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:05:11.441000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:05:11.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.516100 sshd[1848]: Connection closed by 4.153.228.146 port 40950 Jan 14 01:05:11.516801 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:11.526655 systemd[1]: sshd@5-10.128.0.42:22-4.153.228.146:40950.service: Deactivated successfully. Jan 14 01:05:11.529105 kernel: audit: type=1130 audit(1768352711.447:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.530151 kernel: audit: type=1131 audit(1768352711.447:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.530648 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:05:11.533154 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:05:11.537040 systemd-logind[1572]: Removed session 7. Jan 14 01:05:11.447000 audit[1849]: USER_END pid=1849 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:11.574896 kernel: audit: type=1106 audit(1768352711.447:218): pid=1849 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.575436 kernel: audit: type=1104 audit(1768352711.447:219): pid=1849 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.447000 audit[1849]: CRED_DISP pid=1849 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.520000 audit[1844]: USER_END pid=1844 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:11.632886 kernel: audit: type=1106 audit(1768352711.520:220): pid=1844 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:11.633047 kernel: audit: type=1104 audit(1768352711.520:221): pid=1844 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:11.520000 audit[1844]: CRED_DISP pid=1844 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:11.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.42:22-4.153.228.146:40950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.682971 kernel: audit: type=1131 audit(1768352711.526:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.42:22-4.153.228.146:40950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:11.690884 systemd[1]: Started sshd@6-10.128.0.42:22-4.153.228.146:40952.service - OpenSSH per-connection server daemon (4.153.228.146:40952). Jan 14 01:05:11.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.42:22-4.153.228.146:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:12.011000 audit[1883]: USER_ACCT pid=1883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:12.012990 sshd[1883]: Accepted publickey for core from 4.153.228.146 port 40952 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:05:12.012000 audit[1883]: CRED_ACQ pid=1883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:12.012000 audit[1883]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3a4916a0 a2=3 a3=0 items=0 ppid=1 pid=1883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:12.012000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:05:12.014703 sshd-session[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:05:12.022470 systemd-logind[1572]: New session 8 of user core. Jan 14 01:05:12.029352 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:05:12.032000 audit[1883]: USER_START pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:12.035000 audit[1887]: CRED_ACQ pid=1887 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:12.131000 audit[1888]: USER_ACCT pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:12.132885 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:05:12.132000 audit[1888]: CRED_REFR pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:12.132000 audit[1888]: USER_START pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:12.133463 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:05:12.686051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:05:12.689517 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:05:12.694407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
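
The "Scheduled restart job, restart counter is at 1" line above shows kubelet.service carries an automatic restart policy, so the missing-config failure from earlier is retried rather than left dead. The policy and counter can be read back from systemd:

systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
journalctl -u kubelet.service -n 20 --no-pager   # most recent failure output
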
Jan 14 01:05:12.699654 (dockerd)[1907]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:05:13.065020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:13.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:13.086517 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:05:13.166575 kubelet[1920]: E0114 01:05:13.166505 1920 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:05:13.174456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:05:13.174976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:05:13.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:05:13.176309 systemd[1]: kubelet.service: Consumed 257ms CPU time, 110.4M memory peak. Jan 14 01:05:13.214742 dockerd[1907]: time="2026-01-14T01:05:13.214663377Z" level=info msg="Starting up" Jan 14 01:05:13.218959 dockerd[1907]: time="2026-01-14T01:05:13.218905535Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:05:13.235534 dockerd[1907]: time="2026-01-14T01:05:13.235416553Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:05:13.294285 dockerd[1907]: time="2026-01-14T01:05:13.293929096Z" level=info msg="Loading containers: start." 
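
The NETFILTER_CFG audit records that follow are dockerd building its chain scaffolding via /usr/bin/xtables-nft-multi: DOCKER in the nat table plus DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER in the filter table, for both IPv4 and IPv6. Once dockerd is up, the same layout can be read back:

# Inspect the chains the audit records below show dockerd creating.
iptables  -t nat    -S DOCKER
iptables  -t filter -S DOCKER-FORWARD
iptables  -t filter -S DOCKER-USER
ip6tables -t filter -S DOCKER-ISOLATION-STAGE-1
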
Jan 14 01:05:13.314122 kernel: Initializing XFRM netlink socket Jan 14 01:05:13.399000 audit[1970]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.399000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffc2e20cd0 a2=0 a3=0 items=0 ppid=1907 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.399000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:05:13.402000 audit[1972]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.402000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc7d33eb50 a2=0 a3=0 items=0 ppid=1907 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:05:13.405000 audit[1974]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.405000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3a2fb5d0 a2=0 a3=0 items=0 ppid=1907 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:05:13.408000 audit[1976]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.408000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff64d29910 a2=0 a3=0 items=0 ppid=1907 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:05:13.411000 audit[1978]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.411000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd69ba4340 a2=0 a3=0 items=0 ppid=1907 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:05:13.414000 audit[1980]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.414000 audit[1980]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffd8512fb80 a2=0 a3=0 items=0 ppid=1907 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:05:13.417000 audit[1982]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.417000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe2e71b50 a2=0 a3=0 items=0 ppid=1907 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:05:13.421000 audit[1984]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.421000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe186ff120 a2=0 a3=0 items=0 ppid=1907 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:05:13.464000 audit[1987]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.464000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffea299f260 a2=0 a3=0 items=0 ppid=1907 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.464000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:05:13.469000 audit[1990]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.469000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffcec08ea0 a2=0 a3=0 items=0 ppid=1907 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.469000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:05:13.473000 audit[1992]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.473000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc2ac38060 a2=0 
a3=0 items=0 ppid=1907 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:05:13.476000 audit[1994]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.476000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe1615dd70 a2=0 a3=0 items=0 ppid=1907 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:05:13.480000 audit[1996]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.480000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc35fc7b50 a2=0 a3=0 items=0 ppid=1907 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:05:13.540000 audit[2026]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.540000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffa7b1b800 a2=0 a3=0 items=0 ppid=1907 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:05:13.543000 audit[2028]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.543000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff5fd451f0 a2=0 a3=0 items=0 ppid=1907 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:05:13.546000 audit[2030]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.546000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd5daf0e0 a2=0 a3=0 items=0 ppid=1907 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:05:13.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:05:13.549000 audit[2032]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.549000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2e2ae830 a2=0 a3=0 items=0 ppid=1907 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:05:13.552000 audit[2034]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.552000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd39f3c20 a2=0 a3=0 items=0 ppid=1907 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.552000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:05:13.555000 audit[2036]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.555000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe62736530 a2=0 a3=0 items=0 ppid=1907 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:05:13.558000 audit[2038]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.558000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4bda0c20 a2=0 a3=0 items=0 ppid=1907 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:05:13.562000 audit[2040]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.562000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcf532a850 a2=0 a3=0 items=0 ppid=1907 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.562000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:05:13.566000 audit[2042]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.566000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffec91cc0d0 a2=0 a3=0 items=0 ppid=1907 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:05:13.569000 audit[2044]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.569000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeefe64520 a2=0 a3=0 items=0 ppid=1907 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:05:13.572000 audit[2046]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.572000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc7b9d3640 a2=0 a3=0 items=0 ppid=1907 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:05:13.576000 audit[2048]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.576000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc30d7a6a0 a2=0 a3=0 items=0 ppid=1907 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:05:13.579000 audit[2050]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.579000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd0d1cba90 a2=0 a3=0 items=0 ppid=1907 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.579000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:05:13.587000 audit[2055]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.587000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0979c080 a2=0 a3=0 items=0 ppid=1907 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:05:13.590000 audit[2057]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.590000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdbea70c60 a2=0 a3=0 items=0 ppid=1907 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:05:13.594000 audit[2059]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.594000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff7a4e9730 a2=0 a3=0 items=0 ppid=1907 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:05:13.597000 audit[2061]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.597000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe305e7a80 a2=0 a3=0 items=0 ppid=1907 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:05:13.600000 audit[2063]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.600000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcb4564880 a2=0 a3=0 items=0 ppid=1907 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:05:13.603000 audit[2065]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2065 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:13.603000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3386acd0 a2=0 a3=0 items=0 ppid=1907 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:05:13.634000 audit[2070]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.634000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffee7ce980 a2=0 a3=0 items=0 ppid=1907 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:05:13.638000 audit[2073]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.638000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc53db0540 a2=0 a3=0 items=0 ppid=1907 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:05:13.653000 audit[2081]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.653000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffce7df1630 a2=0 a3=0 items=0 ppid=1907 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.653000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:05:13.671000 audit[2087]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.671000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc8d511250 a2=0 a3=0 items=0 ppid=1907 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:05:13.675000 audit[2089]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:05:13.675000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd4a39c9c0 a2=0 a3=0 items=0 ppid=1907 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:05:13.679000 audit[2091]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.679000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd4dd75b80 a2=0 a3=0 items=0 ppid=1907 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:05:13.682000 audit[2093]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.682000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeb9e0d5f0 a2=0 a3=0 items=0 ppid=1907 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:05:13.685000 audit[2095]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:13.685000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffdcb43110 a2=0 a3=0 items=0 ppid=1907 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:13.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:05:13.687676 systemd-networkd[1503]: docker0: Link UP Jan 14 01:05:13.693591 dockerd[1907]: time="2026-01-14T01:05:13.693517958Z" level=info msg="Loading containers: done." 
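The audit PROCTITLE records in the block above carry the exact iptables/ip6tables command lines that dockerd issued while creating its DOCKER, DOCKER-FORWARD, DOCKER-ISOLATION and masquerade rules, but the kernel logs them hex-encoded with NUL bytes separating the argv entries. A minimal decoding sketch in Python (not something the log itself runs; the sample string is the proctitle recorded for pid 2055 above):

    # Decode a Linux audit PROCTITLE value: hex-encoded bytes, argv entries separated by NUL.
    hex_proctitle = (
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D740066696C746572002D4E00444F434B45522D55534552"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER

Decoded this way, the run of NETFILTER_CFG/SYSCALL/PROCTITLE triples above is dockerd creating the DOCKER-USER chain and its forward, isolation and masquerade rules for both IPv4 (family=2) and IPv6 (family=10) just before docker0 comes up.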
Jan 14 01:05:13.719589 dockerd[1907]: time="2026-01-14T01:05:13.719518581Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:05:13.719880 dockerd[1907]: time="2026-01-14T01:05:13.719633504Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:05:13.719880 dockerd[1907]: time="2026-01-14T01:05:13.719761166Z" level=info msg="Initializing buildkit" Jan 14 01:05:13.754201 dockerd[1907]: time="2026-01-14T01:05:13.753879824Z" level=info msg="Completed buildkit initialization" Jan 14 01:05:13.764332 dockerd[1907]: time="2026-01-14T01:05:13.764265980Z" level=info msg="Daemon has completed initialization" Jan 14 01:05:13.764502 dockerd[1907]: time="2026-01-14T01:05:13.764346375Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:05:13.765002 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:05:13.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:14.740720 containerd[1606]: time="2026-01-14T01:05:14.740651036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:05:15.217001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273115260.mount: Deactivated successfully. Jan 14 01:05:16.772364 containerd[1606]: time="2026-01-14T01:05:16.772279575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:16.774056 containerd[1606]: time="2026-01-14T01:05:16.773940901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 14 01:05:16.775937 containerd[1606]: time="2026-01-14T01:05:16.775766259Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:16.780029 containerd[1606]: time="2026-01-14T01:05:16.779897609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:16.782301 containerd[1606]: time="2026-01-14T01:05:16.782082418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.041354923s" Jan 14 01:05:16.782301 containerd[1606]: time="2026-01-14T01:05:16.782135222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:05:16.783301 containerd[1606]: time="2026-01-14T01:05:16.783249846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:05:18.469899 containerd[1606]: time="2026-01-14T01:05:18.469831831Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:18.471618 containerd[1606]: time="2026-01-14T01:05:18.471558227Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:05:18.473120 containerd[1606]: time="2026-01-14T01:05:18.472863485Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:18.476737 containerd[1606]: time="2026-01-14T01:05:18.476657268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:18.479064 containerd[1606]: time="2026-01-14T01:05:18.478163051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.694824576s" Jan 14 01:05:18.479064 containerd[1606]: time="2026-01-14T01:05:18.478209980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:05:18.479299 containerd[1606]: time="2026-01-14T01:05:18.479178033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:05:19.772570 containerd[1606]: time="2026-01-14T01:05:19.772499917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:19.774439 containerd[1606]: time="2026-01-14T01:05:19.774393923Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Jan 14 01:05:19.776355 containerd[1606]: time="2026-01-14T01:05:19.776299362Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:19.780500 containerd[1606]: time="2026-01-14T01:05:19.780458233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:19.782173 containerd[1606]: time="2026-01-14T01:05:19.781942728Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.302726708s" Jan 14 01:05:19.782173 containerd[1606]: time="2026-01-14T01:05:19.781987500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:05:19.782600 containerd[1606]: time="2026-01-14T01:05:19.782550366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 
01:05:20.881586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308276128.mount: Deactivated successfully. Jan 14 01:05:21.579829 containerd[1606]: time="2026-01-14T01:05:21.579740178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:21.581443 containerd[1606]: time="2026-01-14T01:05:21.581384006Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Jan 14 01:05:21.583234 containerd[1606]: time="2026-01-14T01:05:21.583156725Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:21.586234 containerd[1606]: time="2026-01-14T01:05:21.586164766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:21.587271 containerd[1606]: time="2026-01-14T01:05:21.587081183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.804466879s" Jan 14 01:05:21.587271 containerd[1606]: time="2026-01-14T01:05:21.587129307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:05:21.588160 containerd[1606]: time="2026-01-14T01:05:21.588121966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:05:21.962784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328831212.mount: Deactivated successfully. Jan 14 01:05:23.213302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:05:23.216803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:05:23.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:23.572387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:23.601817 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 01:05:23.601986 kernel: audit: type=1130 audit(1768352723.572:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:23.609482 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:05:23.626112 containerd[1606]: time="2026-01-14T01:05:23.625181619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:23.627880 containerd[1606]: time="2026-01-14T01:05:23.627821955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 01:05:23.629186 containerd[1606]: time="2026-01-14T01:05:23.629132646Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:23.635893 containerd[1606]: time="2026-01-14T01:05:23.635844702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:23.639436 containerd[1606]: time="2026-01-14T01:05:23.639369669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.05120308s" Jan 14 01:05:23.639635 containerd[1606]: time="2026-01-14T01:05:23.639610623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:05:23.641772 containerd[1606]: time="2026-01-14T01:05:23.641706483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:05:23.678719 kubelet[2267]: E0114 01:05:23.678627 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:05:23.681553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:05:23.681799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:05:23.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:05:23.682507 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.6M memory peak. Jan 14 01:05:23.704135 kernel: audit: type=1131 audit(1768352723.681:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:05:24.241891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727408339.mount: Deactivated successfully. 
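The kubelet.service failure above (main process exited with status 1) is the unit crash-looping because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written once kubeadm init or kubeadm join runs, so the "Scheduled restart job" counter climbing at this point is expected rather than a fault. A hypothetical pre-flight check mirroring the same condition (illustrative only, not anything present on the host):

    from pathlib import Path

    # The kubelet aborts at startup while this file is missing; kubeadm normally creates it.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        print(f"{cfg} not found; kubelet.service will keep restarting until kubeadm writes it")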
Jan 14 01:05:24.249521 containerd[1606]: time="2026-01-14T01:05:24.249454957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:05:24.250611 containerd[1606]: time="2026-01-14T01:05:24.250554313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:05:24.251812 containerd[1606]: time="2026-01-14T01:05:24.251746564Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:05:24.254549 containerd[1606]: time="2026-01-14T01:05:24.254486372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:05:24.256092 containerd[1606]: time="2026-01-14T01:05:24.255439210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 613.679864ms" Jan 14 01:05:24.256092 containerd[1606]: time="2026-01-14T01:05:24.255485212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:05:24.256297 containerd[1606]: time="2026-01-14T01:05:24.256267863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:05:24.772007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620471459.mount: Deactivated successfully. 
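Each containerd "Pulled image" record above reports the image reference, the repo digest, a size in bytes and the wall-clock pull time (for pause:3.10 just above: size "320368" in 613.679864ms). A hedged sketch for extracting those fields from journal text of this shape; the regex and the sample line are illustrative, assume the backslash-escaped inner quotes shown in the log, and are not any containerd API:

    import re

    # containerd logs pull completion as: msg="Pulled image \"<ref>\" ... size \"<bytes>\" in <duration>"
    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^\\"]+)\\".*?'
        r'size \\"(?P<size>\d+)\\" in (?P<elapsed>[\d.]+(?:ms|s))'
    )

    sample = r'msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873e\", size \"320368\" in 613.679864ms"'
    m = PULLED.search(sample)
    print(m.group("image"), int(m.group("size")), m.group("elapsed"))
    # -> registry.k8s.io/pause:3.10 320368 613.679864ms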
Jan 14 01:05:27.028674 containerd[1606]: time="2026-01-14T01:05:27.028600865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:27.030311 containerd[1606]: time="2026-01-14T01:05:27.030100527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 14 01:05:27.031517 containerd[1606]: time="2026-01-14T01:05:27.031478441Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:27.035716 containerd[1606]: time="2026-01-14T01:05:27.035653759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:27.037530 containerd[1606]: time="2026-01-14T01:05:27.037371111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.781063236s" Jan 14 01:05:27.037530 containerd[1606]: time="2026-01-14T01:05:27.037413547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:05:29.903053 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 01:05:29.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:29.928278 kernel: audit: type=1131 audit(1768352729.903:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:29.935000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:05:29.944141 kernel: audit: type=1334 audit(1768352729.935:278): prog-id=62 op=UNLOAD Jan 14 01:05:32.115743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:32.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.116140 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.6M memory peak. Jan 14 01:05:32.124327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:05:32.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.160228 kernel: audit: type=1130 audit(1768352732.114:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:32.160364 kernel: audit: type=1131 audit(1768352732.115:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.186543 systemd[1]: Reload requested from client PID 2364 ('systemctl') (unit session-8.scope)... Jan 14 01:05:32.186570 systemd[1]: Reloading... Jan 14 01:05:32.399114 zram_generator::config[2411]: No configuration found. Jan 14 01:05:32.704238 systemd[1]: Reloading finished in 516 ms. Jan 14 01:05:32.754154 kernel: audit: type=1334 audit(1768352732.744:281): prog-id=66 op=LOAD Jan 14 01:05:32.744000 audit: BPF prog-id=66 op=LOAD Jan 14 01:05:32.744000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:05:32.745000 audit: BPF prog-id=67 op=LOAD Jan 14 01:05:32.771346 kernel: audit: type=1334 audit(1768352732.744:282): prog-id=65 op=UNLOAD Jan 14 01:05:32.771448 kernel: audit: type=1334 audit(1768352732.745:283): prog-id=67 op=LOAD Jan 14 01:05:32.771491 kernel: audit: type=1334 audit(1768352732.745:284): prog-id=46 op=UNLOAD Jan 14 01:05:32.745000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:05:32.745000 audit: BPF prog-id=68 op=LOAD Jan 14 01:05:32.785736 kernel: audit: type=1334 audit(1768352732.745:285): prog-id=68 op=LOAD Jan 14 01:05:32.745000 audit: BPF prog-id=69 op=LOAD Jan 14 01:05:32.796262 kernel: audit: type=1334 audit(1768352732.745:286): prog-id=69 op=LOAD Jan 14 01:05:32.745000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:05:32.745000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:05:32.748000 audit: BPF prog-id=70 op=LOAD Jan 14 01:05:32.749000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:05:32.749000 audit: BPF prog-id=71 op=LOAD Jan 14 01:05:32.749000 audit: BPF prog-id=72 op=LOAD Jan 14 01:05:32.749000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:05:32.749000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:05:32.750000 audit: BPF prog-id=73 op=LOAD Jan 14 01:05:32.750000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:05:32.750000 audit: BPF prog-id=74 op=LOAD Jan 14 01:05:32.750000 audit: BPF prog-id=75 op=LOAD Jan 14 01:05:32.750000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:05:32.750000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:05:32.751000 audit: BPF prog-id=76 op=LOAD Jan 14 01:05:32.751000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:05:32.751000 audit: BPF prog-id=77 op=LOAD Jan 14 01:05:32.751000 audit: BPF prog-id=78 op=LOAD Jan 14 01:05:32.751000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:05:32.751000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:05:32.752000 audit: BPF prog-id=79 op=LOAD Jan 14 01:05:32.752000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:05:32.752000 audit: BPF prog-id=80 op=LOAD Jan 14 01:05:32.753000 audit: BPF prog-id=81 op=LOAD Jan 14 01:05:32.753000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:05:32.753000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:05:32.754000 audit: BPF prog-id=82 op=LOAD Jan 14 01:05:32.754000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:05:32.754000 audit: BPF prog-id=83 op=LOAD Jan 14 01:05:32.754000 audit: BPF prog-id=84 op=LOAD Jan 14 01:05:32.754000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:05:32.754000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:05:32.782000 audit: BPF prog-id=85 op=LOAD Jan 14 01:05:32.782000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:05:32.809893 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:05:32.810029 systemd[1]: kubelet.service: Failed with result 'signal'. 
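The audit(1768352732.744:281) identifiers inside these kernel audit lines are seconds since the Unix epoch plus a per-boot serial number, so they can be cross-checked against the syslog timestamps; the burst of BPF prog-id LOAD/UNLOAD records here is consistent with systemd detaching and re-attaching its per-unit cgroup BPF programs during the daemon reload logged just above. A small conversion sketch (the stamp is taken from the line above; nothing else is assumed):

    from datetime import datetime, timezone

    # Audit record IDs look like audit(<epoch seconds>.<millis>:<serial>).
    stamp = "1768352732.744:281"
    epoch, serial = stamp.split(":")
    print(datetime.fromtimestamp(float(epoch), tz=timezone.utc).isoformat(), "serial", serial)
    # -> 2026-01-14T01:05:32.744000+00:00 serial 281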
Jan 14 01:05:32.810710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:32.810836 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.6M memory peak. Jan 14 01:05:32.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:05:32.813351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:05:33.111797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:33.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.127716 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:05:33.184321 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:05:33.184321 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:05:33.184321 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:05:33.184943 kubelet[2462]: I0114 01:05:33.184364 2462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:05:33.766100 kubelet[2462]: I0114 01:05:33.765878 2462 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:05:33.766100 kubelet[2462]: I0114 01:05:33.765926 2462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:05:33.767015 kubelet[2462]: I0114 01:05:33.766985 2462 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:05:33.835366 kubelet[2462]: I0114 01:05:33.835302 2462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:05:33.835739 kubelet[2462]: E0114 01:05:33.835677 2462 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:05:33.849093 kubelet[2462]: I0114 01:05:33.849038 2462 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:05:33.854812 kubelet[2462]: I0114 01:05:33.854758 2462 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:05:33.855266 kubelet[2462]: I0114 01:05:33.855206 2462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:05:33.855506 kubelet[2462]: I0114 01:05:33.855252 2462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:05:33.855718 kubelet[2462]: I0114 01:05:33.855509 2462 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:05:33.855718 kubelet[2462]: I0114 01:05:33.855529 2462 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:05:33.855861 kubelet[2462]: I0114 01:05:33.855728 2462 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:05:33.861449 kubelet[2462]: I0114 01:05:33.861387 2462 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:05:33.861449 kubelet[2462]: I0114 01:05:33.861444 2462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:05:33.861859 kubelet[2462]: I0114 01:05:33.861513 2462 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:05:33.864491 kubelet[2462]: I0114 01:05:33.864447 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:05:33.871749 kubelet[2462]: E0114 01:05:33.871298 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3&limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:05:33.873349 kubelet[2462]: E0114 01:05:33.873310 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:05:33.873473 kubelet[2462]: I0114 01:05:33.873449 2462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:05:33.874466 kubelet[2462]: I0114 01:05:33.874433 2462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:05:33.875901 kubelet[2462]: W0114 01:05:33.875857 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:05:33.898034 kubelet[2462]: I0114 01:05:33.897488 2462 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:05:33.898034 kubelet[2462]: I0114 01:05:33.897580 2462 server.go:1289] "Started kubelet" Jan 14 01:05:33.901511 kubelet[2462]: I0114 01:05:33.901463 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:05:33.910128 kubelet[2462]: E0114 01:05:33.907029 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3.188a73713d8d91a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,UID:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,},FirstTimestamp:2026-01-14 01:05:33.89752772 +0000 UTC m=+0.763419089,LastTimestamp:2026-01-14 01:05:33.89752772 +0000 UTC m=+0.763419089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,}" Jan 14 01:05:33.910477 kubelet[2462]: I0114 01:05:33.910439 2462 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:05:33.911929 kubelet[2462]: I0114 01:05:33.911905 2462 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:05:33.911000 audit[2476]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.911000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe454d6140 a2=0 a3=0 items=0 ppid=2462 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:05:33.914000 audit[2478]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.914000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc794c1860 a2=0 a3=0 items=0 ppid=2462 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.914000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:05:33.917587 kubelet[2462]: I0114 01:05:33.917156 2462 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:05:33.917587 kubelet[2462]: E0114 01:05:33.917519 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" Jan 14 01:05:33.917897 kubelet[2462]: I0114 01:05:33.917841 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:05:33.918309 kubelet[2462]: I0114 01:05:33.918285 2462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:05:33.918704 kubelet[2462]: I0114 01:05:33.918684 2462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:05:33.920553 kubelet[2462]: I0114 01:05:33.920524 2462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:05:33.920649 kubelet[2462]: I0114 01:05:33.920610 2462 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:05:33.921000 audit[2481]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.921000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee0872310 a2=0 a3=0 items=0 ppid=2462 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:05:33.924836 kubelet[2462]: E0114 01:05:33.924804 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:05:33.925101 kubelet[2462]: E0114 01:05:33.925033 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3?timeout=10s\": dial tcp 10.128.0.42:6443: connect: connection refused" interval="200ms" Jan 14 01:05:33.925290 kubelet[2462]: E0114 01:05:33.925269 2462 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:05:33.925780 kubelet[2462]: I0114 01:05:33.925745 2462 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:05:33.926008 kubelet[2462]: I0114 01:05:33.925983 2462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:05:33.925000 audit[2483]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.925000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff09d39870 a2=0 a3=0 items=0 ppid=2462 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:05:33.928880 kubelet[2462]: I0114 01:05:33.928819 2462 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:05:33.946000 audit[2489]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.946000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc4eca33b0 a2=0 a3=0 items=0 ppid=2462 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:05:33.947886 kubelet[2462]: I0114 01:05:33.947559 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:05:33.947000 audit[2490]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:33.947000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff84403300 a2=0 a3=0 items=0 ppid=2462 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:05:33.949739 kubelet[2462]: I0114 01:05:33.949669 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:05:33.949739 kubelet[2462]: I0114 01:05:33.949695 2462 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:05:33.949739 kubelet[2462]: I0114 01:05:33.949723 2462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
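The repeated "dial tcp 10.128.0.42:6443: connect: connection refused" errors above (the certificate signing request, the Node/Service/CSIDriver listers, the lease controller and the event writes) all share one cause: the kubelet is running, but the API server it is configured to reach at 10.128.0.42:6443 is not listening yet, which is the expected state while the control-plane static pods under /etc/kubernetes/manifests are still being created further down. A hedged diagnostic sketch probing that endpoint; the address is taken from the log and the helper is purely illustrative:

    import socket

    def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
        # True only if a TCP connection can be established within the timeout.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Stays False for as long as the kube-apiserver static pod has not started.
    print(tcp_reachable("10.128.0.42", 6443))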
Jan 14 01:05:33.949739 kubelet[2462]: I0114 01:05:33.949734 2462 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:05:33.949928 kubelet[2462]: E0114 01:05:33.949793 2462 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:05:33.951000 audit[2491]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.951000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe6be2530 a2=0 a3=0 items=0 ppid=2462 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:05:33.953797 kubelet[2462]: E0114 01:05:33.953755 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:05:33.952000 audit[2492]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:33.952000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc0488620 a2=0 a3=0 items=0 ppid=2462 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:05:33.955637 kubelet[2462]: I0114 01:05:33.955611 2462 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:05:33.955734 kubelet[2462]: I0114 01:05:33.955664 2462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:05:33.955734 kubelet[2462]: I0114 01:05:33.955690 2462 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:05:33.955000 audit[2495]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.955000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe67569420 a2=0 a3=0 items=0 ppid=2462 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:05:33.957000 audit[2496]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:33.957000 audit[2497]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:33.957000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffeffbf5b70 a2=0 a3=0 items=0 ppid=2462 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.957000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:05:33.957000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdec103530 a2=0 a3=0 items=0 ppid=2462 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:05:33.959818 kubelet[2462]: I0114 01:05:33.959763 2462 policy_none.go:49] "None policy: Start" Jan 14 01:05:33.959818 kubelet[2462]: I0114 01:05:33.959794 2462 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:05:33.959818 kubelet[2462]: I0114 01:05:33.959813 2462 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:05:33.960000 audit[2498]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:33.960000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbeee2620 a2=0 a3=0 items=0 ppid=2462 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:33.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:05:33.968613 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:05:33.986205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:05:33.991773 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:05:34.000463 kubelet[2462]: E0114 01:05:34.000408 2462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:05:34.000855 kubelet[2462]: I0114 01:05:34.000826 2462 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:05:34.000949 kubelet[2462]: I0114 01:05:34.000856 2462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:05:34.001574 kubelet[2462]: I0114 01:05:34.001528 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:05:34.003373 kubelet[2462]: E0114 01:05:34.003341 2462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:05:34.003463 kubelet[2462]: E0114 01:05:34.003413 2462 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" Jan 14 01:05:34.034694 kubelet[2462]: E0114 01:05:34.034437 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3.188a73713d8d91a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,UID:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,},FirstTimestamp:2026-01-14 01:05:33.89752772 +0000 UTC m=+0.763419089,LastTimestamp:2026-01-14 01:05:33.89752772 +0000 UTC m=+0.763419089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,}" Jan 14 01:05:34.075355 systemd[1]: Created slice kubepods-burstable-podf8bf4ef7793709ae84c12370217c6ca9.slice - libcontainer container kubepods-burstable-podf8bf4ef7793709ae84c12370217c6ca9.slice. Jan 14 01:05:34.084557 kubelet[2462]: E0114 01:05:34.084486 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.091970 systemd[1]: Created slice kubepods-burstable-podf71be2024290dbe9ed6d15121bf496f0.slice - libcontainer container kubepods-burstable-podf71be2024290dbe9ed6d15121bf496f0.slice. Jan 14 01:05:34.101844 kubelet[2462]: E0114 01:05:34.101789 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.105591 kubelet[2462]: I0114 01:05:34.105517 2462 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.105834 systemd[1]: Created slice kubepods-burstable-pod18ce3955e603ff564e0f3d4884e724ab.slice - libcontainer container kubepods-burstable-pod18ce3955e603ff564e0f3d4884e724ab.slice. 
Jan 14 01:05:34.107103 kubelet[2462]: E0114 01:05:34.106754 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.42:6443/api/v1/nodes\": dial tcp 10.128.0.42:6443: connect: connection refused" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.109002 kubelet[2462]: E0114 01:05:34.108950 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.126740 kubelet[2462]: E0114 01:05:34.126654 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3?timeout=10s\": dial tcp 10.128.0.42:6443: connect: connection refused" interval="400ms" Jan 14 01:05:34.222214 kubelet[2462]: I0114 01:05:34.222150 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18ce3955e603ff564e0f3d4884e724ab-kubeconfig\") pod \"kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"18ce3955e603ff564e0f3d4884e724ab\") " pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.222214 kubelet[2462]: I0114 01:05:34.222212 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-ca-certs\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.222930 kubelet[2462]: I0114 01:05:34.222246 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-k8s-certs\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.222930 kubelet[2462]: I0114 01:05:34.222273 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.222930 kubelet[2462]: I0114 01:05:34.222306 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-ca-certs\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.222930 kubelet[2462]: I0114 01:05:34.222335 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-kubeconfig\") pod 
\"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.223103 kubelet[2462]: I0114 01:05:34.222374 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.223103 kubelet[2462]: I0114 01:05:34.222477 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-k8s-certs\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.223103 kubelet[2462]: I0114 01:05:34.222518 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.311658 kubelet[2462]: I0114 01:05:34.311503 2462 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.312659 kubelet[2462]: E0114 01:05:34.311911 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.42:6443/api/v1/nodes\": dial tcp 10.128.0.42:6443: connect: connection refused" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.386404 containerd[1606]: time="2026-01-14T01:05:34.386342405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:f8bf4ef7793709ae84c12370217c6ca9,Namespace:kube-system,Attempt:0,}" Jan 14 01:05:34.403248 containerd[1606]: time="2026-01-14T01:05:34.403149354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:f71be2024290dbe9ed6d15121bf496f0,Namespace:kube-system,Attempt:0,}" Jan 14 01:05:34.413852 containerd[1606]: time="2026-01-14T01:05:34.413156311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:18ce3955e603ff564e0f3d4884e724ab,Namespace:kube-system,Attempt:0,}" Jan 14 01:05:34.427602 containerd[1606]: time="2026-01-14T01:05:34.427462095Z" level=info msg="connecting to shim eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825" address="unix:///run/containerd/s/fc85b5bd2665432b4fa0e7fd02559547591debd3026c9240d348d13281e35a56" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:05:34.488136 containerd[1606]: time="2026-01-14T01:05:34.486917089Z" level=info msg="connecting to shim 30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df" 
address="unix:///run/containerd/s/b4f8e3b07231126f1e9c53b875c203f0458d830ddd50323453e779fe587a8e83" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:05:34.508428 systemd[1]: Started cri-containerd-eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825.scope - libcontainer container eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825. Jan 14 01:05:34.527562 kubelet[2462]: E0114 01:05:34.527503 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3?timeout=10s\": dial tcp 10.128.0.42:6443: connect: connection refused" interval="800ms" Jan 14 01:05:34.544000 audit: BPF prog-id=86 op=LOAD Jan 14 01:05:34.552901 containerd[1606]: time="2026-01-14T01:05:34.552834506Z" level=info msg="connecting to shim f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96" address="unix:///run/containerd/s/ecce1a62ca8f182c0a46a96e1c104a024017dba3ac3f0babc56d269da4b95224" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:05:34.556665 systemd[1]: Started cri-containerd-30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df.scope - libcontainer container 30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df. Jan 14 01:05:34.557000 audit: BPF prog-id=87 op=LOAD Jan 14 01:05:34.557000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.558000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:05:34.558000 audit[2520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.559000 audit: BPF prog-id=88 op=LOAD Jan 14 01:05:34.559000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.559000 audit: BPF prog-id=89 op=LOAD Jan 14 01:05:34.559000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2508 
pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.560000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:05:34.560000 audit[2520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.560000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:05:34.560000 audit[2520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.560000 audit: BPF prog-id=90 op=LOAD Jan 14 01:05:34.560000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386663346666626163636132376137343838303536646231666134 Jan 14 01:05:34.600030 systemd[1]: Started cri-containerd-f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96.scope - libcontainer container f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96. 
Jan 14 01:05:34.604000 audit: BPF prog-id=91 op=LOAD Jan 14 01:05:34.607000 audit: BPF prog-id=92 op=LOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f2238 a2=98 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=93 op=LOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f2488 a2=98 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=94 op=LOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f2218 a2=98 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.607000 audit: BPF prog-id=95 op=LOAD Jan 14 01:05:34.607000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f26e8 a2=98 a3=0 items=0 ppid=2540 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330626463303666323032663161323062313462633566393462303961 Jan 14 01:05:34.634000 audit: BPF prog-id=96 op=LOAD Jan 14 01:05:34.636000 audit: BPF prog-id=97 op=LOAD Jan 14 01:05:34.636000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.637000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:05:34.637000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.638000 audit: BPF prog-id=98 op=LOAD Jan 14 01:05:34.638000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.638000 audit: BPF prog-id=99 op=LOAD Jan 14 01:05:34.638000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2578 pid=2592 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.638000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:05:34.638000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.638000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:05:34.638000 audit[2592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.638000 audit: BPF prog-id=100 op=LOAD Jan 14 01:05:34.638000 audit[2592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2578 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643663346565396239396661653064646436393162303235653139 Jan 14 01:05:34.662780 containerd[1606]: time="2026-01-14T01:05:34.662725357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:f8bf4ef7793709ae84c12370217c6ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825\"" Jan 14 01:05:34.674115 containerd[1606]: time="2026-01-14T01:05:34.673728875Z" level=info msg="CreateContainer within sandbox \"eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:05:34.702183 containerd[1606]: time="2026-01-14T01:05:34.701128210Z" level=info msg="Container fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:05:34.715465 containerd[1606]: time="2026-01-14T01:05:34.714996757Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:f71be2024290dbe9ed6d15121bf496f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df\"" Jan 14 01:05:34.718003 kubelet[2462]: I0114 01:05:34.717970 2462 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.719038 kubelet[2462]: E0114 01:05:34.718984 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.42:6443/api/v1/nodes\": dial tcp 10.128.0.42:6443: connect: connection refused" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.719760 kubelet[2462]: E0114 01:05:34.719600 2462 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f" Jan 14 01:05:34.719872 containerd[1606]: time="2026-01-14T01:05:34.719632856Z" level=info msg="CreateContainer within sandbox \"eb8fc4ffbacca27a7488056db1fa4bfcdc5fe869c93ad49d87fa6c36bbdd6825\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e\"" Jan 14 01:05:34.720696 containerd[1606]: time="2026-01-14T01:05:34.720464207Z" level=info msg="StartContainer for \"fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e\"" Jan 14 01:05:34.725806 containerd[1606]: time="2026-01-14T01:05:34.725746324Z" level=info msg="connecting to shim fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e" address="unix:///run/containerd/s/fc85b5bd2665432b4fa0e7fd02559547591debd3026c9240d348d13281e35a56" protocol=ttrpc version=3 Jan 14 01:05:34.726735 containerd[1606]: time="2026-01-14T01:05:34.726700340Z" level=info msg="CreateContainer within sandbox \"30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:05:34.740107 containerd[1606]: time="2026-01-14T01:05:34.739373511Z" level=info msg="Container d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:05:34.751306 containerd[1606]: time="2026-01-14T01:05:34.751260123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3,Uid:18ce3955e603ff564e0f3d4884e724ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96\"" Jan 14 01:05:34.756877 containerd[1606]: time="2026-01-14T01:05:34.756831291Z" level=info msg="CreateContainer within sandbox \"30bdc06f202f1a20b14bc5f94b09a61c733ea47cf84667545e32e51c672ac1df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016\"" Jan 14 01:05:34.758088 containerd[1606]: time="2026-01-14T01:05:34.758027272Z" level=info msg="StartContainer for \"d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016\"" Jan 14 01:05:34.761425 containerd[1606]: time="2026-01-14T01:05:34.761386996Z" level=info msg="connecting to shim d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016" address="unix:///run/containerd/s/b4f8e3b07231126f1e9c53b875c203f0458d830ddd50323453e779fe587a8e83" protocol=ttrpc 
version=3 Jan 14 01:05:34.762307 containerd[1606]: time="2026-01-14T01:05:34.762278122Z" level=info msg="CreateContainer within sandbox \"f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:05:34.773592 systemd[1]: Started cri-containerd-fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e.scope - libcontainer container fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e. Jan 14 01:05:34.784637 containerd[1606]: time="2026-01-14T01:05:34.784591555Z" level=info msg="Container 21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:05:34.799280 containerd[1606]: time="2026-01-14T01:05:34.799239404Z" level=info msg="CreateContainer within sandbox \"f0d6c4ee9b99fae0ddd691b025e195de297c30c2478d14f20edef7899186de96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5\"" Jan 14 01:05:34.800058 containerd[1606]: time="2026-01-14T01:05:34.800013817Z" level=info msg="StartContainer for \"21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5\"" Jan 14 01:05:34.800544 systemd[1]: Started cri-containerd-d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016.scope - libcontainer container d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016. Jan 14 01:05:34.810254 containerd[1606]: time="2026-01-14T01:05:34.808914326Z" level=info msg="connecting to shim 21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5" address="unix:///run/containerd/s/ecce1a62ca8f182c0a46a96e1c104a024017dba3ac3f0babc56d269da4b95224" protocol=ttrpc version=3 Jan 14 01:05:34.825000 audit: BPF prog-id=101 op=LOAD Jan 14 01:05:34.828000 audit: BPF prog-id=102 op=LOAD Jan 14 01:05:34.828000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.828000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:05:34.828000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.828000 audit: BPF prog-id=103 op=LOAD Jan 14 01:05:34.828000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:05:34.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.829000 audit: BPF prog-id=104 op=LOAD Jan 14 01:05:34.829000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.830000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:05:34.830000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.830000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:05:34.830000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.830000 audit: BPF prog-id=105 op=LOAD Jan 14 01:05:34.830000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2508 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664623139633939303465356133366631626634323432346330343332 Jan 14 01:05:34.847000 audit: BPF prog-id=106 op=LOAD Jan 14 01:05:34.849000 audit: BPF prog-id=107 op=LOAD Jan 14 01:05:34.849000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.849000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:05:34.849000 audit[2649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.850000 audit: BPF prog-id=108 op=LOAD Jan 14 01:05:34.850000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.850000 audit: BPF prog-id=109 op=LOAD Jan 14 01:05:34.850000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.850000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:05:34.850000 audit[2649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.850000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:05:34.850000 audit[2649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.850000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.850000 audit: BPF prog-id=110 op=LOAD Jan 14 01:05:34.850000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2540 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386565336531613331656531383565626464636139383838616537 Jan 14 01:05:34.866387 systemd[1]: Started cri-containerd-21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5.scope - libcontainer container 21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5. Jan 14 01:05:34.932137 kernel: kauditd_printk_skb: 182 callbacks suppressed Jan 14 01:05:34.932329 kernel: audit: type=1334 audit(1768352734.922:375): prog-id=111 op=LOAD Jan 14 01:05:34.922000 audit: BPF prog-id=111 op=LOAD Jan 14 01:05:34.928000 audit: BPF prog-id=112 op=LOAD Jan 14 01:05:34.944464 kubelet[2462]: E0114 01:05:34.940112 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.959098 containerd[1606]: time="2026-01-14T01:05:34.958668617Z" level=info msg="StartContainer for \"fdb19c9904e5a36f1bf42424c0432277b954c14155a8e40a7279ca77a146e32e\" returns successfully" Jan 14 01:05:34.975771 kernel: audit: type=1334 audit(1768352734.928:376): prog-id=112 op=LOAD Jan 14 01:05:34.975920 kernel: audit: type=1300 audit(1768352734.928:376): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.977990 kernel: audit: type=1327 audit(1768352734.928:376): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.978191 kubelet[2462]: E0114 01:05:34.977749 2462 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:34.928000 audit: BPF prog-id=112 op=UNLOAD Jan 14 01:05:35.014162 kernel: audit: type=1334 audit(1768352734.928:377): prog-id=112 op=UNLOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:35.048247 kernel: audit: type=1300 audit(1768352734.928:377): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: BPF prog-id=113 op=LOAD Jan 14 01:05:35.083226 kubelet[2462]: E0114 01:05:35.081984 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:05:35.087099 kernel: audit: type=1327 audit(1768352734.928:377): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:35.087217 kernel: audit: type=1334 audit(1768352734.928:378): prog-id=113 op=LOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:35.115284 containerd[1606]: time="2026-01-14T01:05:35.114383468Z" level=info msg="StartContainer for \"d28ee3e1a31ee185ebddca9888ae7e531917a09732ec86f1f166a8ba06522016\" returns successfully" Jan 14 01:05:35.117185 kernel: audit: type=1300 audit(1768352734.928:378): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:35.117316 kernel: audit: type=1327 audit(1768352734.928:378): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: BPF prog-id=114 op=LOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:34.928000 audit: BPF prog-id=115 op=LOAD Jan 14 01:05:34.928000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2578 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231616339646336383161386539336431643130666430633231396338 Jan 14 01:05:35.188568 containerd[1606]: time="2026-01-14T01:05:35.188504586Z" level=info msg="StartContainer for \"21ac9dc681a8e93d1d10fd0c219c819e7fc39e3841fcc9e6d09a6deb1021ecb5\" returns successfully" Jan 14 01:05:35.524888 kubelet[2462]: I0114 01:05:35.524573 2462 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:35.980168 kubelet[2462]: E0114 01:05:35.979878 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:35.983852 kubelet[2462]: E0114 01:05:35.983812 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:35.984718 kubelet[2462]: E0114 01:05:35.984683 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:36.989882 kubelet[2462]: E0114 01:05:36.989835 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:36.991087 kubelet[2462]: E0114 01:05:36.991031 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:37.991806 kubelet[2462]: E0114 01:05:37.991764 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.046994 kubelet[2462]: E0114 01:05:38.046924 2462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.108154 kubelet[2462]: I0114 01:05:38.107536 2462 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.108154 kubelet[2462]: E0114 01:05:38.107590 2462 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\": node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" not found" Jan 14 01:05:38.120872 kubelet[2462]: I0114 01:05:38.120824 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.148102 kubelet[2462]: E0114 01:05:38.147601 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.148102 kubelet[2462]: I0114 01:05:38.147647 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.157597 kubelet[2462]: E0114 01:05:38.157554 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.157790 kubelet[2462]: I0114 01:05:38.157774 2462 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.163983 kubelet[2462]: E0114 01:05:38.163944 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:38.875585 kubelet[2462]: I0114 01:05:38.875531 2462 apiserver.go:52] "Watching apiserver" Jan 14 01:05:38.921509 kubelet[2462]: I0114 01:05:38.921464 2462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:05:40.216981 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-8.scope)... Jan 14 01:05:40.217006 systemd[1]: Reloading... Jan 14 01:05:40.365115 zram_generator::config[2790]: No configuration found. Jan 14 01:05:40.705526 systemd[1]: Reloading finished in 487 ms. Jan 14 01:05:40.754034 kubelet[2462]: I0114 01:05:40.752932 2462 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:05:40.753295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:05:40.764168 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:05:40.764624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:40.764734 systemd[1]: kubelet.service: Consumed 1.289s CPU time, 130.9M memory peak. Jan 14 01:05:40.791639 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:05:40.791778 kernel: audit: type=1131 audit(1768352740.763:383): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:40.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:40.769588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:05:40.772000 audit: BPF prog-id=116 op=LOAD Jan 14 01:05:40.772000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:05:40.807372 kernel: audit: type=1334 audit(1768352740.772:384): prog-id=116 op=LOAD Jan 14 01:05:40.807470 kernel: audit: type=1334 audit(1768352740.772:385): prog-id=76 op=UNLOAD Jan 14 01:05:40.808227 kernel: audit: type=1334 audit(1768352740.772:386): prog-id=117 op=LOAD Jan 14 01:05:40.772000 audit: BPF prog-id=117 op=LOAD Jan 14 01:05:40.772000 audit: BPF prog-id=118 op=LOAD Jan 14 01:05:40.830708 kernel: audit: type=1334 audit(1768352740.772:387): prog-id=118 op=LOAD Jan 14 01:05:40.830859 kernel: audit: type=1334 audit(1768352740.772:388): prog-id=77 op=UNLOAD Jan 14 01:05:40.830902 kernel: audit: type=1334 audit(1768352740.772:389): prog-id=78 op=UNLOAD Jan 14 01:05:40.830951 kernel: audit: type=1334 audit(1768352740.773:390): prog-id=119 op=LOAD Jan 14 01:05:40.830987 kernel: audit: type=1334 audit(1768352740.773:391): prog-id=82 op=UNLOAD Jan 14 01:05:40.831029 kernel: audit: type=1334 audit(1768352740.777:392): prog-id=120 op=LOAD Jan 14 01:05:40.772000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:05:40.772000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:05:40.773000 audit: BPF prog-id=119 op=LOAD Jan 14 01:05:40.773000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:05:40.777000 audit: BPF prog-id=120 op=LOAD Jan 14 01:05:40.777000 audit: BPF prog-id=121 op=LOAD Jan 14 01:05:40.777000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:05:40.777000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:05:40.780000 audit: BPF prog-id=122 op=LOAD Jan 14 01:05:40.780000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:05:40.780000 audit: BPF prog-id=123 op=LOAD Jan 14 01:05:40.780000 audit: BPF prog-id=124 op=LOAD Jan 14 01:05:40.780000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:05:40.780000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:05:40.781000 audit: BPF prog-id=125 op=LOAD Jan 14 01:05:40.781000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:05:40.782000 audit: BPF prog-id=126 op=LOAD Jan 14 01:05:40.782000 audit: BPF prog-id=127 op=LOAD Jan 14 01:05:40.782000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:05:40.782000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:05:40.783000 audit: BPF prog-id=128 op=LOAD Jan 14 01:05:40.783000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:05:40.786000 audit: BPF prog-id=129 op=LOAD Jan 14 01:05:40.786000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:05:40.789000 audit: BPF prog-id=130 op=LOAD Jan 14 01:05:40.789000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:05:40.789000 audit: BPF prog-id=131 op=LOAD Jan 14 01:05:40.789000 audit: BPF prog-id=132 op=LOAD Jan 14 01:05:40.789000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:05:40.789000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:05:40.792000 audit: BPF prog-id=133 op=LOAD Jan 14 01:05:40.792000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:05:40.792000 audit: BPF prog-id=134 op=LOAD Jan 14 01:05:40.792000 audit: BPF prog-id=135 op=LOAD Jan 14 01:05:40.792000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:05:40.792000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:05:41.199466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:05:41.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:41.214670 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:05:41.303495 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:05:41.304438 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:05:41.304438 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:05:41.304438 kubelet[2838]: I0114 01:05:41.303875 2838 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:05:41.323940 kubelet[2838]: I0114 01:05:41.323343 2838 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:05:41.323940 kubelet[2838]: I0114 01:05:41.323380 2838 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:05:41.324614 kubelet[2838]: I0114 01:05:41.324585 2838 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:05:41.328478 kubelet[2838]: I0114 01:05:41.328349 2838 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:05:41.333622 kubelet[2838]: I0114 01:05:41.333323 2838 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:05:41.345091 kubelet[2838]: I0114 01:05:41.345037 2838 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:05:41.354985 kubelet[2838]: I0114 01:05:41.354465 2838 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:05:41.354985 kubelet[2838]: I0114 01:05:41.354802 2838 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:05:41.355254 kubelet[2838]: I0114 01:05:41.354847 2838 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:05:41.355254 kubelet[2838]: I0114 01:05:41.355235 2838 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:05:41.355254 kubelet[2838]: I0114 01:05:41.355257 2838 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:05:41.355489 kubelet[2838]: I0114 01:05:41.355320 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:05:41.355538 kubelet[2838]: I0114 01:05:41.355527 2838 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:05:41.355583 kubelet[2838]: I0114 01:05:41.355546 2838 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:05:41.355583 kubelet[2838]: I0114 01:05:41.355577 2838 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:05:41.355672 kubelet[2838]: I0114 01:05:41.355599 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:05:41.363559 kubelet[2838]: I0114 01:05:41.363517 2838 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:05:41.365812 kubelet[2838]: I0114 01:05:41.365336 2838 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:05:41.417103 kubelet[2838]: I0114 01:05:41.415481 2838 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:05:41.417103 kubelet[2838]: I0114 01:05:41.415548 2838 server.go:1289] "Started kubelet" Jan 14 01:05:41.417103 kubelet[2838]: I0114 01:05:41.415893 2838 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 
01:05:41.420725 kubelet[2838]: I0114 01:05:41.420377 2838 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:05:41.428408 kubelet[2838]: I0114 01:05:41.415916 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:05:41.428408 kubelet[2838]: I0114 01:05:41.426391 2838 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:05:41.428408 kubelet[2838]: I0114 01:05:41.425338 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:05:41.428408 kubelet[2838]: I0114 01:05:41.425175 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:05:41.447178 kubelet[2838]: I0114 01:05:41.445858 2838 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:05:41.450347 kubelet[2838]: I0114 01:05:41.446005 2838 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:05:41.450347 kubelet[2838]: I0114 01:05:41.449564 2838 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:05:41.459284 kubelet[2838]: I0114 01:05:41.457638 2838 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:05:41.459284 kubelet[2838]: I0114 01:05:41.458829 2838 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:05:41.459977 kubelet[2838]: E0114 01:05:41.458223 2838 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:05:41.467137 kubelet[2838]: I0114 01:05:41.466354 2838 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:05:41.513843 kubelet[2838]: I0114 01:05:41.511230 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:05:41.520479 kubelet[2838]: I0114 01:05:41.519696 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:05:41.520479 kubelet[2838]: I0114 01:05:41.519739 2838 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:05:41.520479 kubelet[2838]: I0114 01:05:41.519770 2838 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 01:05:41.520479 kubelet[2838]: I0114 01:05:41.519781 2838 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:05:41.520479 kubelet[2838]: E0114 01:05:41.519839 2838 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:05:41.619382 kubelet[2838]: I0114 01:05:41.619305 2838 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:05:41.619382 kubelet[2838]: I0114 01:05:41.619330 2838 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:05:41.619382 kubelet[2838]: I0114 01:05:41.619383 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:05:41.619626 kubelet[2838]: I0114 01:05:41.619592 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:05:41.619677 kubelet[2838]: I0114 01:05:41.619618 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:05:41.619677 kubelet[2838]: I0114 01:05:41.619646 2838 policy_none.go:49] "None policy: Start" Jan 14 01:05:41.619677 kubelet[2838]: I0114 01:05:41.619662 2838 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:05:41.619860 kubelet[2838]: I0114 01:05:41.619679 2838 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:05:41.619860 kubelet[2838]: I0114 01:05:41.619828 2838 state_mem.go:75] "Updated machine memory state" Jan 14 01:05:41.621129 kubelet[2838]: E0114 01:05:41.619952 2838 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:05:41.627063 kubelet[2838]: E0114 01:05:41.626877 2838 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:05:41.627908 kubelet[2838]: I0114 01:05:41.627161 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:05:41.627908 kubelet[2838]: I0114 01:05:41.627188 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:05:41.627908 kubelet[2838]: I0114 01:05:41.627764 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:05:41.639966 kubelet[2838]: E0114 01:05:41.639932 2838 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:05:41.749058 kubelet[2838]: I0114 01:05:41.748330 2838 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.761022 kubelet[2838]: I0114 01:05:41.760681 2838 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.761022 kubelet[2838]: I0114 01:05:41.760784 2838 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.824102 kubelet[2838]: I0114 01:05:41.821610 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.824102 kubelet[2838]: I0114 01:05:41.822250 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.824102 kubelet[2838]: I0114 01:05:41.822619 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.838303 kubelet[2838]: I0114 01:05:41.838229 2838 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 14 01:05:41.858528 kubelet[2838]: I0114 01:05:41.857751 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-ca-certs\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.858528 kubelet[2838]: I0114 01:05:41.857811 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.858528 kubelet[2838]: I0114 01:05:41.857846 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.858528 kubelet[2838]: I0114 01:05:41.857878 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8bf4ef7793709ae84c12370217c6ca9-k8s-certs\") pod \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f8bf4ef7793709ae84c12370217c6ca9\") " pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.858871 kubelet[2838]: I0114 01:05:41.857911 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-ca-certs\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.858871 kubelet[2838]: I0114 01:05:41.857939 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.859527 kubelet[2838]: I0114 01:05:41.859354 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-k8s-certs\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.859527 kubelet[2838]: I0114 01:05:41.859419 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f71be2024290dbe9ed6d15121bf496f0-kubeconfig\") pod \"kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"f71be2024290dbe9ed6d15121bf496f0\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:41.859527 kubelet[2838]: I0114 01:05:41.859454 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18ce3955e603ff564e0f3d4884e724ab-kubeconfig\") pod \"kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" (UID: \"18ce3955e603ff564e0f3d4884e724ab\") " pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:42.356798 kubelet[2838]: I0114 01:05:42.356749 2838 apiserver.go:52] "Watching apiserver" Jan 14 01:05:42.450565 kubelet[2838]: I0114 01:05:42.450500 2838 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:05:42.509253 kubelet[2838]: I0114 01:05:42.509041 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" podStartSLOduration=1.50898629 podStartE2EDuration="1.50898629s" podCreationTimestamp="2026-01-14 01:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:05:42.507594969 +0000 UTC m=+1.285236796" watchObservedRunningTime="2026-01-14 01:05:42.50898629 +0000 UTC m=+1.286628093" Jan 14 01:05:42.549234 kubelet[2838]: I0114 01:05:42.549014 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" podStartSLOduration=1.5489899230000002 podStartE2EDuration="1.548989923s" podCreationTimestamp="2026-01-14 01:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:05:42.54850756 +0000 UTC 
m=+1.326149374" watchObservedRunningTime="2026-01-14 01:05:42.548989923 +0000 UTC m=+1.326631736" Jan 14 01:05:42.549848 kubelet[2838]: I0114 01:05:42.549488 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" podStartSLOduration=1.5494719 podStartE2EDuration="1.5494719s" podCreationTimestamp="2026-01-14 01:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:05:42.534684399 +0000 UTC m=+1.312326212" watchObservedRunningTime="2026-01-14 01:05:42.5494719 +0000 UTC m=+1.327113711" Jan 14 01:05:42.595933 kubelet[2838]: I0114 01:05:42.594901 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:42.606101 kubelet[2838]: E0114 01:05:42.604665 2838 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" already exists" pod="kube-system/kube-apiserver-ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:05:44.287919 update_engine[1573]: I20260114 01:05:44.287815 1573 update_attempter.cc:509] Updating boot flags... Jan 14 01:05:46.082056 kubelet[2838]: I0114 01:05:46.082013 2838 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:05:46.083729 containerd[1606]: time="2026-01-14T01:05:46.083680507Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:05:46.084632 kubelet[2838]: I0114 01:05:46.083997 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:05:47.065942 systemd[1]: Created slice kubepods-besteffort-pod339ffa3f_7d77_46f4_af72_ffa51cec5c37.slice - libcontainer container kubepods-besteffort-pod339ffa3f_7d77_46f4_af72_ffa51cec5c37.slice. Jan 14 01:05:47.117819 kubelet[2838]: I0114 01:05:47.117729 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/339ffa3f-7d77-46f4-af72-ffa51cec5c37-var-lib-calico\") pod \"tigera-operator-7dcd859c48-d297s\" (UID: \"339ffa3f-7d77-46f4-af72-ffa51cec5c37\") " pod="tigera-operator/tigera-operator-7dcd859c48-d297s" Jan 14 01:05:47.117819 kubelet[2838]: I0114 01:05:47.117795 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvpf\" (UniqueName: \"kubernetes.io/projected/339ffa3f-7d77-46f4-af72-ffa51cec5c37-kube-api-access-8rvpf\") pod \"tigera-operator-7dcd859c48-d297s\" (UID: \"339ffa3f-7d77-46f4-af72-ffa51cec5c37\") " pod="tigera-operator/tigera-operator-7dcd859c48-d297s" Jan 14 01:05:47.219382 systemd[1]: Created slice kubepods-besteffort-pod31f87ec6_3444_4cbf_9f39_e013d73c35a5.slice - libcontainer container kubepods-besteffort-pod31f87ec6_3444_4cbf_9f39_e013d73c35a5.slice. 
Jan 14 01:05:47.319909 kubelet[2838]: I0114 01:05:47.319448 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f87ec6-3444-4cbf-9f39-e013d73c35a5-xtables-lock\") pod \"kube-proxy-vhttf\" (UID: \"31f87ec6-3444-4cbf-9f39-e013d73c35a5\") " pod="kube-system/kube-proxy-vhttf" Jan 14 01:05:47.319909 kubelet[2838]: I0114 01:05:47.319512 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np7dp\" (UniqueName: \"kubernetes.io/projected/31f87ec6-3444-4cbf-9f39-e013d73c35a5-kube-api-access-np7dp\") pod \"kube-proxy-vhttf\" (UID: \"31f87ec6-3444-4cbf-9f39-e013d73c35a5\") " pod="kube-system/kube-proxy-vhttf" Jan 14 01:05:47.319909 kubelet[2838]: I0114 01:05:47.319554 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f87ec6-3444-4cbf-9f39-e013d73c35a5-lib-modules\") pod \"kube-proxy-vhttf\" (UID: \"31f87ec6-3444-4cbf-9f39-e013d73c35a5\") " pod="kube-system/kube-proxy-vhttf" Jan 14 01:05:47.319909 kubelet[2838]: I0114 01:05:47.319597 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31f87ec6-3444-4cbf-9f39-e013d73c35a5-kube-proxy\") pod \"kube-proxy-vhttf\" (UID: \"31f87ec6-3444-4cbf-9f39-e013d73c35a5\") " pod="kube-system/kube-proxy-vhttf" Jan 14 01:05:47.377097 containerd[1606]: time="2026-01-14T01:05:47.377005647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d297s,Uid:339ffa3f-7d77-46f4-af72-ffa51cec5c37,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:05:47.412400 containerd[1606]: time="2026-01-14T01:05:47.411322228Z" level=info msg="connecting to shim 30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff" address="unix:///run/containerd/s/f80799b943d32d4d7acc784a08ecc6036861bb15e45d11009ed72a181c190f6f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:05:47.470767 systemd[1]: Started cri-containerd-30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff.scope - libcontainer container 30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff. 
Jan 14 01:05:47.502317 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:05:47.502497 kernel: audit: type=1334 audit(1768352747.488:425): prog-id=136 op=LOAD Jan 14 01:05:47.488000 audit: BPF prog-id=136 op=LOAD Jan 14 01:05:47.502000 audit: BPF prog-id=137 op=LOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.528325 containerd[1606]: time="2026-01-14T01:05:47.528275254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhttf,Uid:31f87ec6-3444-4cbf-9f39-e013d73c35a5,Namespace:kube-system,Attempt:0,}" Jan 14 01:05:47.540450 kernel: audit: type=1334 audit(1768352747.502:426): prog-id=137 op=LOAD Jan 14 01:05:47.540749 kernel: audit: type=1300 audit(1768352747.502:426): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.540804 kernel: audit: type=1327 audit(1768352747.502:426): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: BPF prog-id=137 op=UNLOAD Jan 14 01:05:47.576876 kernel: audit: type=1334 audit(1768352747.502:427): prog-id=137 op=UNLOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.635792 kernel: audit: type=1300 audit(1768352747.502:427): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.637527 kernel: audit: type=1327 audit(1768352747.502:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.637584 kernel: audit: type=1334 audit(1768352747.502:428): prog-id=138 op=LOAD Jan 14 01:05:47.502000 audit: BPF prog-id=138 op=LOAD Jan 14 
01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.673807 kernel: audit: type=1300 audit(1768352747.502:428): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.674012 kernel: audit: type=1327 audit(1768352747.502:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: BPF prog-id=139 op=LOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: BPF prog-id=138 op=UNLOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.502000 audit: BPF prog-id=140 op=LOAD Jan 14 01:05:47.502000 audit[2934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2922 pid=2934 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613230323235633434313636633638313534333530623139356135 Jan 14 01:05:47.707016 containerd[1606]: time="2026-01-14T01:05:47.706956053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d297s,Uid:339ffa3f-7d77-46f4-af72-ffa51cec5c37,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff\"" Jan 14 01:05:47.709511 containerd[1606]: time="2026-01-14T01:05:47.709476124Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:05:47.734744 containerd[1606]: time="2026-01-14T01:05:47.734612709Z" level=info msg="connecting to shim aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a" address="unix:///run/containerd/s/9d0d5c17c11fae822f1030483a90da2c07a15c0441205b66b04bcab1b89cfbca" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:05:47.774373 systemd[1]: Started cri-containerd-aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a.scope - libcontainer container aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a. Jan 14 01:05:47.791000 audit: BPF prog-id=141 op=LOAD Jan 14 01:05:47.791000 audit: BPF prog-id=142 op=LOAD Jan 14 01:05:47.791000 audit[2981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.791000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:05:47.791000 audit[2981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.791000 audit: BPF prog-id=143 op=LOAD Jan 14 01:05:47.791000 audit[2981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.791000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.792000 audit: BPF prog-id=144 op=LOAD Jan 14 01:05:47.792000 audit[2981]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.792000 audit: BPF prog-id=144 op=UNLOAD Jan 14 01:05:47.792000 audit[2981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.792000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:05:47.792000 audit[2981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.792000 audit: BPF prog-id=145 op=LOAD Jan 14 01:05:47.792000 audit[2981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2969 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646634663233356530323538626632363033623565666639653637 Jan 14 01:05:47.819652 containerd[1606]: time="2026-01-14T01:05:47.819598249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhttf,Uid:31f87ec6-3444-4cbf-9f39-e013d73c35a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a\"" Jan 14 01:05:47.826963 containerd[1606]: time="2026-01-14T01:05:47.826909874Z" level=info msg="CreateContainer within sandbox \"aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:05:47.840153 containerd[1606]: time="2026-01-14T01:05:47.839608996Z" level=info msg="Container 5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:05:47.850424 containerd[1606]: time="2026-01-14T01:05:47.850358420Z" level=info msg="CreateContainer within sandbox \"aedf4f235e0258bf2603b5eff9e677375cf1909ec79c079accf88f98eb2f772a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf\"" Jan 14 01:05:47.851265 containerd[1606]: time="2026-01-14T01:05:47.851224437Z" level=info msg="StartContainer for \"5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf\"" Jan 14 01:05:47.854144 containerd[1606]: time="2026-01-14T01:05:47.854100016Z" level=info msg="connecting to shim 5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf" address="unix:///run/containerd/s/9d0d5c17c11fae822f1030483a90da2c07a15c0441205b66b04bcab1b89cfbca" protocol=ttrpc version=3 Jan 14 01:05:47.881419 systemd[1]: Started cri-containerd-5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf.scope - libcontainer container 5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf. Jan 14 01:05:47.962000 audit: BPF prog-id=146 op=LOAD Jan 14 01:05:47.962000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2969 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613565346432646265393232353939353330646538646234616431 Jan 14 01:05:47.962000 audit: BPF prog-id=147 op=LOAD Jan 14 01:05:47.962000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2969 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613565346432646265393232353939353330646538646234616431 Jan 14 01:05:47.962000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:05:47.962000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613565346432646265393232353939353330646538646234616431 Jan 14 01:05:47.963000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:05:47.963000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3007 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613565346432646265393232353939353330646538646234616431 Jan 14 01:05:47.963000 audit: BPF prog-id=148 op=LOAD Jan 14 01:05:47.963000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2969 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:47.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613565346432646265393232353939353330646538646234616431 Jan 14 01:05:47.996612 containerd[1606]: time="2026-01-14T01:05:47.996486935Z" level=info msg="StartContainer for \"5da5e4d2dbe922599530de8db4ad127b576342d7b97a6991adccefad5c0703bf\" returns successfully" Jan 14 01:05:48.168000 audit[3071]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.168000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd392835e0 a2=0 a3=7ffd392835cc items=0 ppid=3020 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:05:48.177000 audit[3073]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.177000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda6333870 a2=0 a3=7ffda633385c items=0 ppid=3020 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:05:48.182000 audit[3076]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.182000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8ba59990 a2=0 a3=7ffc8ba5997c items=0 ppid=3020 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:05:48.188000 audit[3077]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3077 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.188000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffceb493d30 a2=0 a3=7ffceb493d1c items=0 ppid=3020 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:05:48.191000 audit[3078]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.191000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc812e64e0 a2=0 a3=7ffc812e64cc items=0 ppid=3020 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:05:48.194000 audit[3079]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.194000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc221afe40 a2=0 a3=7ffc221afe2c items=0 ppid=3020 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:05:48.282000 audit[3080]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.282000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdee503db0 a2=0 a3=7ffdee503d9c items=0 ppid=3020 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:05:48.288000 audit[3082]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.288000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc2020e500 a2=0 a3=7ffc2020e4ec items=0 ppid=3020 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:05:48.295000 audit[3085]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3085 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.295000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc96707550 a2=0 a3=7ffc9670753c items=0 ppid=3020 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:05:48.297000 audit[3086]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.297000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffea32ac80 a2=0 a3=7fffea32ac6c items=0 ppid=3020 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:05:48.301000 audit[3088]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.301000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1db1ee60 a2=0 a3=7ffd1db1ee4c items=0 ppid=3020 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:05:48.303000 audit[3089]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.303000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4faab450 a2=0 a3=7fff4faab43c items=0 ppid=3020 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:05:48.308000 audit[3091]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.308000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0f3f1ad0 a2=0 a3=7ffe0f3f1abc items=0 ppid=3020 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.308000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:05:48.314000 audit[3094]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.314000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffbcef8140 a2=0 a3=7fffbcef812c items=0 ppid=3020 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:05:48.316000 audit[3095]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.316000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc630982f0 a2=0 a3=7ffc630982dc items=0 ppid=3020 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:05:48.320000 audit[3097]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.320000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcadadb210 a2=0 a3=7ffcadadb1fc items=0 ppid=3020 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:05:48.322000 audit[3098]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.322000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc06febd60 a2=0 a3=7ffc06febd4c items=0 ppid=3020 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:05:48.326000 audit[3100]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.326000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9fe1c500 a2=0 a3=7ffe9fe1c4ec items=0 ppid=3020 
pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:05:48.332000 audit[3103]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.332000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd32fa590 a2=0 a3=7fffd32fa57c items=0 ppid=3020 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:05:48.338000 audit[3106]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.338000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc89c068e0 a2=0 a3=7ffc89c068cc items=0 ppid=3020 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:05:48.340000 audit[3107]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.340000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe04137c70 a2=0 a3=7ffe04137c5c items=0 ppid=3020 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:05:48.344000 audit[3109]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.344000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd38c8eb00 a2=0 a3=7ffd38c8eaec items=0 ppid=3020 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.344000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:05:48.351000 audit[3112]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.351000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5b27d2f0 a2=0 a3=7ffc5b27d2dc items=0 ppid=3020 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:05:48.353000 audit[3113]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.353000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccbc075d0 a2=0 a3=7ffccbc075bc items=0 ppid=3020 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:05:48.358000 audit[3115]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:05:48.358000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffde45a3610 a2=0 a3=7ffde45a35fc items=0 ppid=3020 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:05:48.390000 audit[3121]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:48.390000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf10b7dc0 a2=0 a3=7ffcf10b7dac items=0 ppid=3020 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:48.402000 audit[3121]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:48.402000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcf10b7dc0 a2=0 a3=7ffcf10b7dac items=0 ppid=3020 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:48.404000 audit[3126]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.404000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc993b4340 a2=0 a3=7ffc993b432c items=0 ppid=3020 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:05:48.410000 audit[3128]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.410000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffec2dad460 a2=0 a3=7ffec2dad44c items=0 ppid=3020 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:05:48.417000 audit[3131]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.417000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd209bb30 a2=0 a3=7fffd209bb1c items=0 ppid=3020 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:05:48.419000 audit[3132]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.419000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb039abc0 a2=0 a3=7ffeb039abac items=0 ppid=3020 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:05:48.423000 audit[3134]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.423000 audit[3134]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff16ea6dd0 a2=0 a3=7fff16ea6dbc items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:05:48.425000 audit[3135]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.425000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeca47c950 a2=0 a3=7ffeca47c93c items=0 ppid=3020 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:05:48.430000 audit[3137]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.430000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffda3d665f0 a2=0 a3=7ffda3d665dc items=0 ppid=3020 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:05:48.436000 audit[3140]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.436000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc3b1c7b70 a2=0 a3=7ffc3b1c7b5c items=0 ppid=3020 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:05:48.438000 audit[3141]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.438000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde2070ed0 a2=0 a3=7ffde2070ebc items=0 ppid=3020 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.438000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:05:48.443000 audit[3143]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.443000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc222feaf0 a2=0 a3=7ffc222feadc items=0 ppid=3020 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:05:48.445000 audit[3144]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.445000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc958c7c40 a2=0 a3=7ffc958c7c2c items=0 ppid=3020 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:05:48.450000 audit[3146]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.450000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2d2e9810 a2=0 a3=7ffc2d2e97fc items=0 ppid=3020 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:05:48.456000 audit[3149]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.456000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb3e4dc10 a2=0 a3=7ffeb3e4dbfc items=0 ppid=3020 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:05:48.464000 audit[3152]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.464000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe97154330 a2=0 a3=7ffe9715431c 
items=0 ppid=3020 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:05:48.466000 audit[3153]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.466000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcd8120c0 a2=0 a3=7ffdcd8120ac items=0 ppid=3020 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:05:48.470000 audit[3155]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.470000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdc6ab6f50 a2=0 a3=7ffdc6ab6f3c items=0 ppid=3020 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:05:48.476000 audit[3158]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.476000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd894d25c0 a2=0 a3=7ffd894d25ac items=0 ppid=3020 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:05:48.478000 audit[3159]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.478000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffbfbe4f0 a2=0 a3=7ffffbfbe4dc items=0 ppid=3020 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:05:48.482000 audit[3161]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3161 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.482000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd5efe63b0 a2=0 a3=7ffd5efe639c items=0 ppid=3020 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:05:48.485000 audit[3162]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.485000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe02e02390 a2=0 a3=7ffe02e0237c items=0 ppid=3020 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:05:48.489000 audit[3164]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.489000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe9fd3d8e0 a2=0 a3=7ffe9fd3d8cc items=0 ppid=3020 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:05:48.497000 audit[3167]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:05:48.497000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd561fc000 a2=0 a3=7ffd561fbfec items=0 ppid=3020 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:05:48.504000 audit[3169]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:05:48.504000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff254d11c0 a2=0 a3=7fff254d11ac items=0 ppid=3020 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.504000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:48.505000 audit[3169]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain 
pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:05:48.505000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff254d11c0 a2=0 a3=7fff254d11ac items=0 ppid=3020 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:48.505000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:48.654885 kubelet[2838]: I0114 01:05:48.654794 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhttf" podStartSLOduration=1.654768962 podStartE2EDuration="1.654768962s" podCreationTimestamp="2026-01-14 01:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:05:48.640429437 +0000 UTC m=+7.418071254" watchObservedRunningTime="2026-01-14 01:05:48.654768962 +0000 UTC m=+7.432410775" Jan 14 01:05:49.527788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728408584.mount: Deactivated successfully. Jan 14 01:05:50.437340 containerd[1606]: time="2026-01-14T01:05:50.437278791Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:50.439004 containerd[1606]: time="2026-01-14T01:05:50.438812522Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:05:50.440883 containerd[1606]: time="2026-01-14T01:05:50.440832892Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:50.446204 containerd[1606]: time="2026-01-14T01:05:50.446124083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:05:50.447452 containerd[1606]: time="2026-01-14T01:05:50.447402532Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.737722975s" Jan 14 01:05:50.447620 containerd[1606]: time="2026-01-14T01:05:50.447455384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:05:50.453354 containerd[1606]: time="2026-01-14T01:05:50.452898353Z" level=info msg="CreateContainer within sandbox \"30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:05:50.466308 containerd[1606]: time="2026-01-14T01:05:50.466218511Z" level=info msg="Container 34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:05:50.476774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508436535.mount: Deactivated successfully. 
Jan 14 01:05:50.480033 containerd[1606]: time="2026-01-14T01:05:50.479983537Z" level=info msg="CreateContainer within sandbox \"30a20225c44166c68154350b195a594c7a7d63064f7ffff43260f23f74cabbff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986\"" Jan 14 01:05:50.481366 containerd[1606]: time="2026-01-14T01:05:50.481271218Z" level=info msg="StartContainer for \"34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986\"" Jan 14 01:05:50.483024 containerd[1606]: time="2026-01-14T01:05:50.482920602Z" level=info msg="connecting to shim 34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986" address="unix:///run/containerd/s/f80799b943d32d4d7acc784a08ecc6036861bb15e45d11009ed72a181c190f6f" protocol=ttrpc version=3 Jan 14 01:05:50.520427 systemd[1]: Started cri-containerd-34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986.scope - libcontainer container 34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986. Jan 14 01:05:50.540000 audit: BPF prog-id=149 op=LOAD Jan 14 01:05:50.541000 audit: BPF prog-id=150 op=LOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=151 op=LOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=152 op=LOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=151 op=UNLOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.541000 audit: BPF prog-id=153 op=LOAD Jan 14 01:05:50.541000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2922 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:50.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373239393632626230613862323635383230336535343661616134 Jan 14 01:05:50.573997 containerd[1606]: time="2026-01-14T01:05:50.573915047Z" level=info msg="StartContainer for \"34729962bb0a8b2658203e546aaa4a2eb1f80cc25b20b1d3608c1734ee2e7986\" returns successfully" Jan 14 01:05:50.648023 kubelet[2838]: I0114 01:05:50.647793 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-d297s" podStartSLOduration=1.908063839 podStartE2EDuration="4.647766663s" podCreationTimestamp="2026-01-14 01:05:46 +0000 UTC" firstStartedPulling="2026-01-14 01:05:47.709039392 +0000 UTC m=+6.486681198" lastFinishedPulling="2026-01-14 01:05:50.448742236 +0000 UTC m=+9.226384022" observedRunningTime="2026-01-14 01:05:50.647041794 +0000 UTC m=+9.424683608" watchObservedRunningTime="2026-01-14 01:05:50.647766663 +0000 UTC m=+9.425408476" Jan 14 01:05:57.777168 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:05:57.777303 kernel: audit: type=1106 audit(1768352757.746:505): pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:57.777343 kernel: audit: type=1104 audit(1768352757.752:506): pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:57.746000 audit[1888]: USER_END pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:57.752000 audit[1888]: CRED_DISP pid=1888 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:05:57.747397 sudo[1888]: pam_unix(sudo:session): session closed for user root Jan 14 01:05:57.850744 sshd[1887]: Connection closed by 4.153.228.146 port 40952 Jan 14 01:05:57.853332 sshd-session[1883]: pam_unix(sshd:session): session closed for user core Jan 14 01:05:57.897870 kernel: audit: type=1106 audit(1768352757.856:507): pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:57.856000 audit[1883]: USER_END pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:57.901428 systemd[1]: sshd@6-10.128.0.42:22-4.153.228.146:40952.service: Deactivated successfully. Jan 14 01:05:57.856000 audit[1883]: CRED_DISP pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:57.908601 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:05:57.910245 systemd[1]: session-8.scope: Consumed 8.129s CPU time, 232.6M memory peak. Jan 14 01:05:57.919994 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:05:57.927921 systemd-logind[1572]: Removed session 8. Jan 14 01:05:57.929369 kernel: audit: type=1104 audit(1768352757.856:508): pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:05:57.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.42:22-4.153.228.146:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:57.955140 kernel: audit: type=1131 audit(1768352757.902:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.42:22-4.153.228.146:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:58.701000 audit[3259]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.722342 kernel: audit: type=1325 audit(1768352758.701:510): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.701000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff0320b320 a2=0 a3=7fff0320b30c items=0 ppid=3020 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.762102 kernel: audit: type=1300 audit(1768352758.701:510): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff0320b320 a2=0 a3=7fff0320b30c items=0 ppid=3020 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.783765 kernel: audit: type=1327 audit(1768352758.701:510): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:58.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:58.723000 audit[3259]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.802127 kernel: audit: type=1325 audit(1768352758.723:511): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.723000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0320b320 a2=0 a3=0 items=0 ppid=3020 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.835140 kernel: audit: type=1300 audit(1768352758.723:511): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0320b320 a2=0 a3=0 items=0 ppid=3020 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:58.807000 audit[3261]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.807000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe5b2bc1d0 a2=0 a3=7ffe5b2bc1bc items=0 ppid=3020 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:05:58.849000 audit[3261]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3261 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:05:58.849000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5b2bc1d0 a2=0 a3=0 items=0 ppid=3020 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:05:58.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.459429 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:06:03.459570 kernel: audit: type=1325 audit(1768352763.436:514): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.436000 audit[3266]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.436000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff4688cde0 a2=0 a3=7fff4688cdcc items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.513140 kernel: audit: type=1300 audit(1768352763.436:514): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff4688cde0 a2=0 a3=7fff4688cdcc items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.513277 kernel: audit: type=1327 audit(1768352763.436:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.496000 audit[3266]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.529647 kernel: audit: type=1325 audit(1768352763.496:515): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.496000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4688cde0 a2=0 a3=0 items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.562113 kernel: audit: type=1300 audit(1768352763.496:515): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4688cde0 a2=0 a3=0 items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.580105 kernel: audit: type=1327 audit(1768352763.496:515): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 14 01:06:03.567000 audit[3268]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.597103 kernel: audit: type=1325 audit(1768352763.567:516): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.597277 kernel: audit: type=1300 audit(1768352763.567:516): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe56b58cc0 a2=0 a3=7ffe56b58cac items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.567000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe56b58cc0 a2=0 a3=7ffe56b58cac items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.648409 kernel: audit: type=1327 audit(1768352763.567:516): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:03.650000 audit[3268]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.668134 kernel: audit: type=1325 audit(1768352763.650:517): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:03.650000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe56b58cc0 a2=0 a3=0 items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:03.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:04.752000 audit[3271]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:04.752000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbbeb1160 a2=0 a3=7ffdbbeb114c items=0 ppid=3020 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:04.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:04.756000 audit[3271]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:04.756000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbbeb1160 a2=0 a3=0 items=0 ppid=3020 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:06:04.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:05.482207 systemd[1]: Created slice kubepods-besteffort-pod0614a580_9517_4496_9952_f3590f9c2a9b.slice - libcontainer container kubepods-besteffort-pod0614a580_9517_4496_9952_f3590f9c2a9b.slice. Jan 14 01:06:05.511000 audit[3274]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:05.511000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe53e44b20 a2=0 a3=7ffe53e44b0c items=0 ppid=3020 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:05.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:05.525000 audit[3274]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:05.525000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe53e44b20 a2=0 a3=0 items=0 ppid=3020 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:05.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:05.554214 kubelet[2838]: I0114 01:06:05.554129 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0614a580-9517-4496-9952-f3590f9c2a9b-typha-certs\") pod \"calico-typha-5bf97f656d-rd48p\" (UID: \"0614a580-9517-4496-9952-f3590f9c2a9b\") " pod="calico-system/calico-typha-5bf97f656d-rd48p" Jan 14 01:06:05.554830 kubelet[2838]: I0114 01:06:05.554237 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0614a580-9517-4496-9952-f3590f9c2a9b-tigera-ca-bundle\") pod \"calico-typha-5bf97f656d-rd48p\" (UID: \"0614a580-9517-4496-9952-f3590f9c2a9b\") " pod="calico-system/calico-typha-5bf97f656d-rd48p" Jan 14 01:06:05.554830 kubelet[2838]: I0114 01:06:05.554277 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmt4t\" (UniqueName: \"kubernetes.io/projected/0614a580-9517-4496-9952-f3590f9c2a9b-kube-api-access-dmt4t\") pod \"calico-typha-5bf97f656d-rd48p\" (UID: \"0614a580-9517-4496-9952-f3590f9c2a9b\") " pod="calico-system/calico-typha-5bf97f656d-rd48p" Jan 14 01:06:05.793013 containerd[1606]: time="2026-01-14T01:06:05.792846469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bf97f656d-rd48p,Uid:0614a580-9517-4496-9952-f3590f9c2a9b,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:05.856877 systemd[1]: Created slice kubepods-besteffort-podb6718c3f_0d79_49ef_8239_9aef71c84cab.slice - libcontainer container kubepods-besteffort-podb6718c3f_0d79_49ef_8239_9aef71c84cab.slice. 
Jan 14 01:06:05.872529 containerd[1606]: time="2026-01-14T01:06:05.872443476Z" level=info msg="connecting to shim 6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465" address="unix:///run/containerd/s/46d99a6efc113b33e901a9b191a93e75f688c092e91d7dd54e827d9883131e9e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:05.951154 systemd[1]: Started cri-containerd-6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465.scope - libcontainer container 6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465. Jan 14 01:06:05.957267 kubelet[2838]: I0114 01:06:05.957064 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-cni-net-dir\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.957736 kubelet[2838]: I0114 01:06:05.957694 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-cni-bin-dir\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.958326 kubelet[2838]: I0114 01:06:05.958197 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-var-lib-calico\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.959576 kubelet[2838]: I0114 01:06:05.959494 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-cni-log-dir\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.959789 kubelet[2838]: I0114 01:06:05.959743 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-flexvol-driver-host\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.959998 kubelet[2838]: I0114 01:06:05.959949 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-lib-modules\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.960214 kubelet[2838]: I0114 01:06:05.960168 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6718c3f-0d79-49ef-8239-9aef71c84cab-node-certs\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.960384 kubelet[2838]: I0114 01:06:05.960360 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-policysync\") pod \"calico-node-w6g7s\" (UID: 
\"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.960526 kubelet[2838]: I0114 01:06:05.960506 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6718c3f-0d79-49ef-8239-9aef71c84cab-tigera-ca-bundle\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.960722 kubelet[2838]: I0114 01:06:05.960672 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8sr\" (UniqueName: \"kubernetes.io/projected/b6718c3f-0d79-49ef-8239-9aef71c84cab-kube-api-access-mp8sr\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.961419 kubelet[2838]: I0114 01:06:05.960835 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-var-run-calico\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:05.961419 kubelet[2838]: I0114 01:06:05.960921 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6718c3f-0d79-49ef-8239-9aef71c84cab-xtables-lock\") pod \"calico-node-w6g7s\" (UID: \"b6718c3f-0d79-49ef-8239-9aef71c84cab\") " pod="calico-system/calico-node-w6g7s" Jan 14 01:06:06.000022 kubelet[2838]: E0114 01:06:05.999451 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:06.007000 audit: BPF prog-id=154 op=LOAD Jan 14 01:06:06.008000 audit: BPF prog-id=155 op=LOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=156 op=LOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=157 op=LOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.008000 audit: BPF prog-id=158 op=LOAD Jan 14 01:06:06.008000 audit[3298]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373064633937396330333466383961616661633838383537353438 Jan 14 01:06:06.061989 kubelet[2838]: I0114 01:06:06.061828 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d5160b5-f0c7-4f7b-963d-652ff95653a3-kubelet-dir\") 
pod \"csi-node-driver-6cw62\" (UID: \"8d5160b5-f0c7-4f7b-963d-652ff95653a3\") " pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:06.061989 kubelet[2838]: I0114 01:06:06.061914 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d5160b5-f0c7-4f7b-963d-652ff95653a3-registration-dir\") pod \"csi-node-driver-6cw62\" (UID: \"8d5160b5-f0c7-4f7b-963d-652ff95653a3\") " pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:06.061989 kubelet[2838]: I0114 01:06:06.061945 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9n8b\" (UniqueName: \"kubernetes.io/projected/8d5160b5-f0c7-4f7b-963d-652ff95653a3-kube-api-access-w9n8b\") pod \"csi-node-driver-6cw62\" (UID: \"8d5160b5-f0c7-4f7b-963d-652ff95653a3\") " pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:06.062398 kubelet[2838]: I0114 01:06:06.062040 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d5160b5-f0c7-4f7b-963d-652ff95653a3-varrun\") pod \"csi-node-driver-6cw62\" (UID: \"8d5160b5-f0c7-4f7b-963d-652ff95653a3\") " pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:06.062398 kubelet[2838]: I0114 01:06:06.062159 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d5160b5-f0c7-4f7b-963d-652ff95653a3-socket-dir\") pod \"csi-node-driver-6cw62\" (UID: \"8d5160b5-f0c7-4f7b-963d-652ff95653a3\") " pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:06.067278 kubelet[2838]: E0114 01:06:06.067213 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.067278 kubelet[2838]: W0114 01:06:06.067250 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.067651 kubelet[2838]: E0114 01:06:06.067297 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.068045 kubelet[2838]: E0114 01:06:06.068004 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.068045 kubelet[2838]: W0114 01:06:06.068028 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.068230 kubelet[2838]: E0114 01:06:06.068051 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.069921 kubelet[2838]: E0114 01:06:06.069850 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.070239 kubelet[2838]: W0114 01:06:06.069888 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.070239 kubelet[2838]: E0114 01:06:06.070135 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.071141 kubelet[2838]: E0114 01:06:06.070623 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.071141 kubelet[2838]: W0114 01:06:06.070646 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.071141 kubelet[2838]: E0114 01:06:06.070666 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.071141 kubelet[2838]: E0114 01:06:06.071047 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.071141 kubelet[2838]: W0114 01:06:06.071062 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.071141 kubelet[2838]: E0114 01:06:06.071108 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.071955 kubelet[2838]: E0114 01:06:06.071925 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.072326 kubelet[2838]: W0114 01:06:06.072167 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.072326 kubelet[2838]: E0114 01:06:06.072195 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.096124 kubelet[2838]: E0114 01:06:06.094381 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.096124 kubelet[2838]: W0114 01:06:06.094413 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.096124 kubelet[2838]: E0114 01:06:06.094442 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.096654 kubelet[2838]: E0114 01:06:06.096601 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.096654 kubelet[2838]: W0114 01:06:06.096629 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.096654 kubelet[2838]: E0114 01:06:06.096655 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.164965 kubelet[2838]: E0114 01:06:06.164925 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.166238 kubelet[2838]: W0114 01:06:06.164956 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.166238 kubelet[2838]: E0114 01:06:06.166062 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.167336 containerd[1606]: time="2026-01-14T01:06:06.167265650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w6g7s,Uid:b6718c3f-0d79-49ef-8239-9aef71c84cab,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:06.168210 kubelet[2838]: E0114 01:06:06.168172 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.168560 kubelet[2838]: W0114 01:06:06.168249 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.168560 kubelet[2838]: E0114 01:06:06.168278 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.169381 kubelet[2838]: E0114 01:06:06.169346 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.169664 kubelet[2838]: W0114 01:06:06.169546 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.169664 kubelet[2838]: E0114 01:06:06.169589 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.170833 containerd[1606]: time="2026-01-14T01:06:06.170730566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bf97f656d-rd48p,Uid:0614a580-9517-4496-9952-f3590f9c2a9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465\"" Jan 14 01:06:06.171529 kubelet[2838]: E0114 01:06:06.171482 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.171807 kubelet[2838]: W0114 01:06:06.171781 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.172587 kubelet[2838]: E0114 01:06:06.171908 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.173438 kubelet[2838]: E0114 01:06:06.173415 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.173805 kubelet[2838]: W0114 01:06:06.173778 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.173958 kubelet[2838]: E0114 01:06:06.173936 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.177623 kubelet[2838]: E0114 01:06:06.177191 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.177623 kubelet[2838]: W0114 01:06:06.177214 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.177623 kubelet[2838]: E0114 01:06:06.177237 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.177971 kubelet[2838]: E0114 01:06:06.177564 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.177971 kubelet[2838]: W0114 01:06:06.177928 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.177971 kubelet[2838]: E0114 01:06:06.177952 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.179695 containerd[1606]: time="2026-01-14T01:06:06.179503654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:06:06.181097 kubelet[2838]: E0114 01:06:06.180772 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.181097 kubelet[2838]: W0114 01:06:06.180797 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.181097 kubelet[2838]: E0114 01:06:06.180820 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.182530 kubelet[2838]: E0114 01:06:06.182351 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.182530 kubelet[2838]: W0114 01:06:06.182372 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.182530 kubelet[2838]: E0114 01:06:06.182390 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.183273 kubelet[2838]: E0114 01:06:06.183197 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.183273 kubelet[2838]: W0114 01:06:06.183217 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.183273 kubelet[2838]: E0114 01:06:06.183236 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.185148 kubelet[2838]: E0114 01:06:06.185047 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.185625 kubelet[2838]: W0114 01:06:06.185548 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.185625 kubelet[2838]: E0114 01:06:06.185576 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.187188 kubelet[2838]: E0114 01:06:06.187033 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.187188 kubelet[2838]: W0114 01:06:06.187054 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.187188 kubelet[2838]: E0114 01:06:06.187098 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.187945 kubelet[2838]: E0114 01:06:06.187906 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.187945 kubelet[2838]: W0114 01:06:06.187925 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.188384 kubelet[2838]: E0114 01:06:06.188344 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.189751 kubelet[2838]: E0114 01:06:06.189729 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.190011 kubelet[2838]: W0114 01:06:06.189851 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.190011 kubelet[2838]: E0114 01:06:06.189876 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.190752 kubelet[2838]: E0114 01:06:06.190695 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.190752 kubelet[2838]: W0114 01:06:06.190714 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.190752 kubelet[2838]: E0114 01:06:06.190732 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.191562 kubelet[2838]: E0114 01:06:06.191399 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.191562 kubelet[2838]: W0114 01:06:06.191419 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.191562 kubelet[2838]: E0114 01:06:06.191437 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.192386 kubelet[2838]: E0114 01:06:06.192276 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.192386 kubelet[2838]: W0114 01:06:06.192298 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.192654 kubelet[2838]: E0114 01:06:06.192536 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.193572 kubelet[2838]: E0114 01:06:06.193513 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.193572 kubelet[2838]: W0114 01:06:06.193533 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.193572 kubelet[2838]: E0114 01:06:06.193551 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.194295 kubelet[2838]: E0114 01:06:06.194236 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.194295 kubelet[2838]: W0114 01:06:06.194258 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.194295 kubelet[2838]: E0114 01:06:06.194275 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.195082 kubelet[2838]: E0114 01:06:06.195010 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.195082 kubelet[2838]: W0114 01:06:06.195029 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.195082 kubelet[2838]: E0114 01:06:06.195046 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.196293 kubelet[2838]: E0114 01:06:06.196242 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.196526 kubelet[2838]: W0114 01:06:06.196263 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.196665 kubelet[2838]: E0114 01:06:06.196626 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.197907 kubelet[2838]: E0114 01:06:06.197881 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.198640 kubelet[2838]: W0114 01:06:06.197906 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.198640 kubelet[2838]: E0114 01:06:06.198219 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:06.201108 kubelet[2838]: E0114 01:06:06.200161 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.201108 kubelet[2838]: W0114 01:06:06.200258 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.201108 kubelet[2838]: E0114 01:06:06.200473 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.203188 kubelet[2838]: E0114 01:06:06.203160 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.203188 kubelet[2838]: W0114 01:06:06.203186 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.203320 kubelet[2838]: E0114 01:06:06.203210 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.204434 kubelet[2838]: E0114 01:06:06.204409 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.204434 kubelet[2838]: W0114 01:06:06.204431 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.204556 kubelet[2838]: E0114 01:06:06.204450 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.215899 containerd[1606]: time="2026-01-14T01:06:06.215818578Z" level=info msg="connecting to shim e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a" address="unix:///run/containerd/s/7f28236d0f3c6de51afcb4ce61c794118a8c707506feb74a749f3d2ca4966c50" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:06.235217 kubelet[2838]: E0114 01:06:06.235161 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:06.235217 kubelet[2838]: W0114 01:06:06.235211 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:06.235426 kubelet[2838]: E0114 01:06:06.235237 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:06.268014 systemd[1]: Started cri-containerd-e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a.scope - libcontainer container e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a. 
Jan 14 01:06:06.294000 audit: BPF prog-id=159 op=LOAD Jan 14 01:06:06.295000 audit: BPF prog-id=160 op=LOAD Jan 14 01:06:06.295000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.296000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:06:06.296000 audit[3378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.296000 audit: BPF prog-id=161 op=LOAD Jan 14 01:06:06.296000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.297000 audit: BPF prog-id=162 op=LOAD Jan 14 01:06:06.297000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.297000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:06:06.297000 audit[3378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.297000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:06:06.297000 audit[3378]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.297000 audit: BPF prog-id=163 op=LOAD Jan 14 01:06:06.297000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3366 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539393461336336326533313763306334313532326232333735313833 Jan 14 01:06:06.334832 containerd[1606]: time="2026-01-14T01:06:06.334689216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w6g7s,Uid:b6718c3f-0d79-49ef-8239-9aef71c84cab,Namespace:calico-system,Attempt:0,} returns sandbox id \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\"" Jan 14 01:06:06.551000 audit[3406]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:06.551000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffcdd73370 a2=0 a3=7fffcdd7335c items=0 ppid=3020 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:06.556000 audit[3406]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:06.556000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcdd73370 a2=0 a3=0 items=0 ppid=3020 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:06.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:07.211089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508275032.mount: Deactivated successfully. 
Jan 14 01:06:07.527177 kubelet[2838]: E0114 01:06:07.526625 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:08.504816 containerd[1606]: time="2026-01-14T01:06:08.504741385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:08.506329 containerd[1606]: time="2026-01-14T01:06:08.506227418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:06:08.508277 containerd[1606]: time="2026-01-14T01:06:08.508046753Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:08.511263 containerd[1606]: time="2026-01-14T01:06:08.511156463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:08.511975 containerd[1606]: time="2026-01-14T01:06:08.511874574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.332294851s" Jan 14 01:06:08.511975 containerd[1606]: time="2026-01-14T01:06:08.511926918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:06:08.515699 containerd[1606]: time="2026-01-14T01:06:08.513810851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:06:08.538195 containerd[1606]: time="2026-01-14T01:06:08.538125886Z" level=info msg="CreateContainer within sandbox \"6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:06:08.552099 containerd[1606]: time="2026-01-14T01:06:08.550292902Z" level=info msg="Container 583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:08.561539 containerd[1606]: time="2026-01-14T01:06:08.561497313Z" level=info msg="CreateContainer within sandbox \"6870dc979c034f89aafac88857548767573bdb57841c9eaf8cfb24f9b1a7c465\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b\"" Jan 14 01:06:08.562403 containerd[1606]: time="2026-01-14T01:06:08.562327691Z" level=info msg="StartContainer for \"583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b\"" Jan 14 01:06:08.564292 containerd[1606]: time="2026-01-14T01:06:08.564255200Z" level=info msg="connecting to shim 583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b" address="unix:///run/containerd/s/46d99a6efc113b33e901a9b191a93e75f688c092e91d7dd54e827d9883131e9e" protocol=ttrpc version=3 Jan 14 01:06:08.601409 systemd[1]: Started 
cri-containerd-583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b.scope - libcontainer container 583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b. Jan 14 01:06:08.621000 audit: BPF prog-id=164 op=LOAD Jan 14 01:06:08.628126 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 01:06:08.628323 kernel: audit: type=1334 audit(1768352768.621:540): prog-id=164 op=LOAD Jan 14 01:06:08.621000 audit: BPF prog-id=165 op=LOAD Jan 14 01:06:08.642408 kernel: audit: type=1334 audit(1768352768.621:541): prog-id=165 op=LOAD Jan 14 01:06:08.672282 kernel: audit: type=1300 audit(1768352768.621:541): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.703541 kernel: audit: type=1327 audit(1768352768.621:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.703661 kernel: audit: type=1334 audit(1768352768.621:542): prog-id=165 op=UNLOAD Jan 14 01:06:08.621000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.738692 kernel: audit: type=1300 audit(1768352768.621:542): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.740873 kernel: audit: type=1327 audit(1768352768.621:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.752394 containerd[1606]: time="2026-01-14T01:06:08.752301442Z" level=info msg="StartContainer for \"583bca1d5d439eccafee8d07a27a7fbcfb68f4652d3324122faefd702429a18b\" returns successfully" Jan 14 
01:06:08.775949 kernel: audit: type=1334 audit(1768352768.621:543): prog-id=166 op=LOAD Jan 14 01:06:08.621000 audit: BPF prog-id=166 op=LOAD Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.807280 kernel: audit: type=1300 audit(1768352768.621:543): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.621000 audit: BPF prog-id=167 op=LOAD Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.621000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.621000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:08.621000 audit: BPF prog-id=168 op=LOAD Jan 14 01:06:08.837114 kernel: audit: type=1327 audit(1768352768.621:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 
Jan 14 01:06:08.621000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3286 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:08.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538336263613164356434333965636361666565386430376132376137 Jan 14 01:06:09.522157 kubelet[2838]: E0114 01:06:09.521824 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:09.622943 containerd[1606]: time="2026-01-14T01:06:09.622852518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:09.624415 containerd[1606]: time="2026-01-14T01:06:09.624322539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:09.627182 containerd[1606]: time="2026-01-14T01:06:09.627123286Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:09.630441 containerd[1606]: time="2026-01-14T01:06:09.630397054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:09.632092 containerd[1606]: time="2026-01-14T01:06:09.631871020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.118014406s" Jan 14 01:06:09.632092 containerd[1606]: time="2026-01-14T01:06:09.631913892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:06:09.638613 containerd[1606]: time="2026-01-14T01:06:09.638520093Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:06:09.656860 containerd[1606]: time="2026-01-14T01:06:09.656291726Z" level=info msg="Container 917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:09.670098 containerd[1606]: time="2026-01-14T01:06:09.669993824Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845\"" Jan 
14 01:06:09.671258 containerd[1606]: time="2026-01-14T01:06:09.671215872Z" level=info msg="StartContainer for \"917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845\"" Jan 14 01:06:09.673798 containerd[1606]: time="2026-01-14T01:06:09.673758523Z" level=info msg="connecting to shim 917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845" address="unix:///run/containerd/s/7f28236d0f3c6de51afcb4ce61c794118a8c707506feb74a749f3d2ca4966c50" protocol=ttrpc version=3 Jan 14 01:06:09.709449 systemd[1]: Started cri-containerd-917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845.scope - libcontainer container 917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845. Jan 14 01:06:09.749351 kubelet[2838]: E0114 01:06:09.749301 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.750174 kubelet[2838]: W0114 01:06:09.749711 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.750174 kubelet[2838]: E0114 01:06:09.749750 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.750925 kubelet[2838]: E0114 01:06:09.750899 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.751358 kubelet[2838]: W0114 01:06:09.751021 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.751358 kubelet[2838]: E0114 01:06:09.751049 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.751860 kubelet[2838]: E0114 01:06:09.751786 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.752085 kubelet[2838]: W0114 01:06:09.751832 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.752085 kubelet[2838]: E0114 01:06:09.751965 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.753044 kubelet[2838]: E0114 01:06:09.752911 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.753044 kubelet[2838]: W0114 01:06:09.752932 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.753044 kubelet[2838]: E0114 01:06:09.752951 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.753633 kubelet[2838]: E0114 01:06:09.753530 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.753633 kubelet[2838]: W0114 01:06:09.753550 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.753633 kubelet[2838]: E0114 01:06:09.753569 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.754317 kubelet[2838]: E0114 01:06:09.754217 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.754317 kubelet[2838]: W0114 01:06:09.754236 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.754317 kubelet[2838]: E0114 01:06:09.754254 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.754900 kubelet[2838]: E0114 01:06:09.754787 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.754900 kubelet[2838]: W0114 01:06:09.754807 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.754900 kubelet[2838]: E0114 01:06:09.754826 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.755540 kubelet[2838]: E0114 01:06:09.755421 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.755540 kubelet[2838]: W0114 01:06:09.755441 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.755540 kubelet[2838]: E0114 01:06:09.755459 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.756105 kubelet[2838]: E0114 01:06:09.756003 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.756105 kubelet[2838]: W0114 01:06:09.756023 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.756105 kubelet[2838]: E0114 01:06:09.756040 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.756712 kubelet[2838]: E0114 01:06:09.756689 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.756712 kubelet[2838]: W0114 01:06:09.756711 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.756850 kubelet[2838]: E0114 01:06:09.756729 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.757172 kubelet[2838]: E0114 01:06:09.757139 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.757172 kubelet[2838]: W0114 01:06:09.757164 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.757312 kubelet[2838]: E0114 01:06:09.757182 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.757946 kubelet[2838]: E0114 01:06:09.757628 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.757946 kubelet[2838]: W0114 01:06:09.757643 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.757946 kubelet[2838]: E0114 01:06:09.757669 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.758461 kubelet[2838]: E0114 01:06:09.758041 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.758461 kubelet[2838]: W0114 01:06:09.758275 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.758461 kubelet[2838]: E0114 01:06:09.758295 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.759402 kubelet[2838]: E0114 01:06:09.759282 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.759402 kubelet[2838]: W0114 01:06:09.759321 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.759402 kubelet[2838]: E0114 01:06:09.759340 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.760165 kubelet[2838]: E0114 01:06:09.760143 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.760354 kubelet[2838]: W0114 01:06:09.760272 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.760354 kubelet[2838]: E0114 01:06:09.760294 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.786000 audit: BPF prog-id=169 op=LOAD Jan 14 01:06:09.786000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3366 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:09.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376663363863646638376639333635633564363738656135313834 Jan 14 01:06:09.786000 audit: BPF prog-id=170 op=LOAD Jan 14 01:06:09.786000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3366 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:09.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376663363863646638376639333635633564363738656135313834 Jan 14 01:06:09.786000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:06:09.786000 audit[3456]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:09.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376663363863646638376639333635633564363738656135313834 Jan 14 01:06:09.786000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:06:09.786000 audit[3456]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:09.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376663363863646638376639333635633564363738656135313834 Jan 14 01:06:09.787000 audit: BPF prog-id=171 op=LOAD Jan 14 01:06:09.787000 audit[3456]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3366 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:09.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376663363863646638376639333635633564363738656135313834 Jan 14 01:06:09.817967 kubelet[2838]: E0114 01:06:09.817902 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.818544 kubelet[2838]: W0114 01:06:09.818064 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.818544 kubelet[2838]: E0114 01:06:09.818304 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.820105 kubelet[2838]: E0114 01:06:09.820056 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.820105 kubelet[2838]: W0114 01:06:09.820104 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.820259 kubelet[2838]: E0114 01:06:09.820129 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.820600 kubelet[2838]: E0114 01:06:09.820531 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.820600 kubelet[2838]: W0114 01:06:09.820550 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.820600 kubelet[2838]: E0114 01:06:09.820567 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.821263 kubelet[2838]: E0114 01:06:09.820900 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.821263 kubelet[2838]: W0114 01:06:09.820917 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.821263 kubelet[2838]: E0114 01:06:09.820933 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.821873 kubelet[2838]: E0114 01:06:09.821374 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.821873 kubelet[2838]: W0114 01:06:09.821389 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.821873 kubelet[2838]: E0114 01:06:09.821405 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.822492 kubelet[2838]: E0114 01:06:09.822193 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.822492 kubelet[2838]: W0114 01:06:09.822210 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.822492 kubelet[2838]: E0114 01:06:09.822228 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.823446 kubelet[2838]: E0114 01:06:09.823388 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.823446 kubelet[2838]: W0114 01:06:09.823407 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.823446 kubelet[2838]: E0114 01:06:09.823425 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.824164 kubelet[2838]: E0114 01:06:09.824062 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.824164 kubelet[2838]: W0114 01:06:09.824125 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.824164 kubelet[2838]: E0114 01:06:09.824143 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.824993 kubelet[2838]: E0114 01:06:09.824820 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.824993 kubelet[2838]: W0114 01:06:09.824839 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.824993 kubelet[2838]: E0114 01:06:09.824867 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.827602 kubelet[2838]: E0114 01:06:09.827537 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.828003 kubelet[2838]: W0114 01:06:09.827875 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.828003 kubelet[2838]: E0114 01:06:09.828224 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.830105 kubelet[2838]: E0114 01:06:09.829955 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.830564 kubelet[2838]: W0114 01:06:09.830472 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.830891 kubelet[2838]: E0114 01:06:09.830695 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.832315 containerd[1606]: time="2026-01-14T01:06:09.832127455Z" level=info msg="StartContainer for \"917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845\" returns successfully" Jan 14 01:06:09.833379 kubelet[2838]: E0114 01:06:09.833172 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.833379 kubelet[2838]: W0114 01:06:09.833192 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.833379 kubelet[2838]: E0114 01:06:09.833209 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.833804 kubelet[2838]: E0114 01:06:09.833645 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.833804 kubelet[2838]: W0114 01:06:09.833662 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.833804 kubelet[2838]: E0114 01:06:09.833678 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.835350 kubelet[2838]: E0114 01:06:09.835201 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.835935 kubelet[2838]: W0114 01:06:09.835792 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.835935 kubelet[2838]: E0114 01:06:09.835823 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.836686 kubelet[2838]: E0114 01:06:09.836628 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.836686 kubelet[2838]: W0114 01:06:09.836647 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.836686 kubelet[2838]: E0114 01:06:09.836664 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.837848 kubelet[2838]: E0114 01:06:09.837790 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.837848 kubelet[2838]: W0114 01:06:09.837809 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.837848 kubelet[2838]: E0114 01:06:09.837827 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.838773 kubelet[2838]: E0114 01:06:09.838748 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.838773 kubelet[2838]: W0114 01:06:09.838771 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.839211 kubelet[2838]: E0114 01:06:09.838788 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:06:09.839987 kubelet[2838]: E0114 01:06:09.839960 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:06:09.839987 kubelet[2838]: W0114 01:06:09.839983 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:06:09.840189 kubelet[2838]: E0114 01:06:09.840119 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:06:09.842733 systemd[1]: cri-containerd-917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845.scope: Deactivated successfully. Jan 14 01:06:09.846000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:06:09.847888 containerd[1606]: time="2026-01-14T01:06:09.847377840Z" level=info msg="received container exit event container_id:\"917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845\" id:\"917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845\" pid:3468 exited_at:{seconds:1768352769 nanos:846396220}" Jan 14 01:06:09.885899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917fc68cdf87f9365c5d678ea518438f7de8e535b7c7fead62c5755955636845-rootfs.mount: Deactivated successfully. Jan 14 01:06:10.730191 kubelet[2838]: I0114 01:06:10.729980 2838 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:06:10.748099 kubelet[2838]: I0114 01:06:10.747609 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bf97f656d-rd48p" podStartSLOduration=3.41263291 podStartE2EDuration="5.747582456s" podCreationTimestamp="2026-01-14 01:06:05 +0000 UTC" firstStartedPulling="2026-01-14 01:06:06.178568357 +0000 UTC m=+24.956210159" lastFinishedPulling="2026-01-14 01:06:08.5135179 +0000 UTC m=+27.291159705" observedRunningTime="2026-01-14 01:06:09.751642351 +0000 UTC m=+28.529284163" watchObservedRunningTime="2026-01-14 01:06:10.747582456 +0000 UTC m=+29.525224267" Jan 14 01:06:11.528058 kubelet[2838]: E0114 01:06:11.527986 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:11.737574 containerd[1606]: time="2026-01-14T01:06:11.737395539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:06:13.521498 kubelet[2838]: E0114 01:06:13.521416 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:14.875526 kubelet[2838]: I0114 01:06:14.875476 2838 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:06:14.989957 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 01:06:14.990529 kernel: audit: type=1325 audit(1768352774.966:554): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:14.966000 audit[3550]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:14.966000 audit[3550]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff86e4ff90 a2=0 a3=7fff86e4ff7c items=0 ppid=3020 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:14.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
01:06:15.040436 kernel: audit: type=1300 audit(1768352774.966:554): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff86e4ff90 a2=0 a3=7fff86e4ff7c items=0 ppid=3020 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.040584 kernel: audit: type=1327 audit(1768352774.966:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:15.023000 audit[3550]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:15.057105 kernel: audit: type=1325 audit(1768352775.023:555): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:15.057266 kernel: audit: type=1300 audit(1768352775.023:555): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff86e4ff90 a2=0 a3=7fff86e4ff7c items=0 ppid=3020 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.023000 audit[3550]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff86e4ff90 a2=0 a3=7fff86e4ff7c items=0 ppid=3020 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.098501 kernel: audit: type=1327 audit(1768352775.023:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:15.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:15.126696 containerd[1606]: time="2026-01-14T01:06:15.126493078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:15.128606 containerd[1606]: time="2026-01-14T01:06:15.128486720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:06:15.130036 containerd[1606]: time="2026-01-14T01:06:15.129941268Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:15.133345 containerd[1606]: time="2026-01-14T01:06:15.133258672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:15.134518 containerd[1606]: time="2026-01-14T01:06:15.134293459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.396841042s" Jan 14 01:06:15.134518 containerd[1606]: time="2026-01-14T01:06:15.134398177Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:06:15.139990 containerd[1606]: time="2026-01-14T01:06:15.139916680Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:06:15.156339 containerd[1606]: time="2026-01-14T01:06:15.156273975Z" level=info msg="Container ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:15.165765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959246767.mount: Deactivated successfully. Jan 14 01:06:15.175681 containerd[1606]: time="2026-01-14T01:06:15.175578035Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c\"" Jan 14 01:06:15.178099 containerd[1606]: time="2026-01-14T01:06:15.177837740Z" level=info msg="StartContainer for \"ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c\"" Jan 14 01:06:15.181049 containerd[1606]: time="2026-01-14T01:06:15.180977703Z" level=info msg="connecting to shim ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c" address="unix:///run/containerd/s/7f28236d0f3c6de51afcb4ce61c794118a8c707506feb74a749f3d2ca4966c50" protocol=ttrpc version=3 Jan 14 01:06:15.217388 systemd[1]: Started cri-containerd-ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c.scope - libcontainer container ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c. 
Jan 14 01:06:15.288000 audit: BPF prog-id=172 op=LOAD Jan 14 01:06:15.288000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.326876 kernel: audit: type=1334 audit(1768352775.288:556): prog-id=172 op=LOAD Jan 14 01:06:15.327312 kernel: audit: type=1300 audit(1768352775.288:556): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.327393 kernel: audit: type=1327 audit(1768352775.288:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.288000 audit: BPF prog-id=173 op=LOAD Jan 14 01:06:15.288000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.288000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:06:15.364162 kernel: audit: type=1334 audit(1768352775.288:557): prog-id=173 op=LOAD Jan 14 01:06:15.288000 audit[3555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.288000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:06:15.288000 audit[3555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.288000 audit: BPF prog-id=174 op=LOAD Jan 14 01:06:15.288000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3366 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:15.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261323861646661646633306165623939666534326262646163663931 Jan 14 01:06:15.385905 containerd[1606]: time="2026-01-14T01:06:15.384515478Z" level=info msg="StartContainer for \"ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c\" returns successfully" Jan 14 01:06:15.521957 kubelet[2838]: E0114 01:06:15.521143 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:16.405633 systemd[1]: cri-containerd-ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c.scope: Deactivated successfully. Jan 14 01:06:16.406662 systemd[1]: cri-containerd-ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c.scope: Consumed 665ms CPU time, 193.1M memory peak, 171.3M written to disk. Jan 14 01:06:16.408000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:06:16.410086 containerd[1606]: time="2026-01-14T01:06:16.409991674Z" level=info msg="received container exit event container_id:\"ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c\" id:\"ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c\" pid:3568 exited_at:{seconds:1768352776 nanos:409604012}" Jan 14 01:06:16.444218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba28adfadf30aeb99fe42bbdacf91f4b911cf25bf65a62e43a344026a9cf978c-rootfs.mount: Deactivated successfully. Jan 14 01:06:16.509172 kubelet[2838]: I0114 01:06:16.508058 2838 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:06:16.719155 systemd[1]: Created slice kubepods-burstable-pod8fab71db_e780_4c15_9007_1a003926cb63.slice - libcontainer container kubepods-burstable-pod8fab71db_e780_4c15_9007_1a003926cb63.slice. 
Jan 14 01:06:16.807295 kubelet[2838]: I0114 01:06:16.807232 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphq9\" (UniqueName: \"kubernetes.io/projected/8fab71db-e780-4c15-9007-1a003926cb63-kube-api-access-sphq9\") pod \"coredns-674b8bbfcf-4qzsv\" (UID: \"8fab71db-e780-4c15-9007-1a003926cb63\") " pod="kube-system/coredns-674b8bbfcf-4qzsv" Jan 14 01:06:16.924104 kubelet[2838]: I0114 01:06:16.807315 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fab71db-e780-4c15-9007-1a003926cb63-config-volume\") pod \"coredns-674b8bbfcf-4qzsv\" (UID: \"8fab71db-e780-4c15-9007-1a003926cb63\") " pod="kube-system/coredns-674b8bbfcf-4qzsv" Jan 14 01:06:16.958769 systemd[1]: Created slice kubepods-besteffort-pod8938f0a3_97c1_409e_8512_d0019d691e13.slice - libcontainer container kubepods-besteffort-pod8938f0a3_97c1_409e_8512_d0019d691e13.slice. Jan 14 01:06:17.013179 kubelet[2838]: I0114 01:06:17.011538 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-ca-bundle\") pod \"whisker-578dbbcb9d-f2xjm\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " pod="calico-system/whisker-578dbbcb9d-f2xjm" Jan 14 01:06:17.013179 kubelet[2838]: I0114 01:06:17.012514 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/addca0d5-666f-4204-b79f-467069ad43cd-config-volume\") pod \"coredns-674b8bbfcf-jgpk9\" (UID: \"addca0d5-666f-4204-b79f-467069ad43cd\") " pod="kube-system/coredns-674b8bbfcf-jgpk9" Jan 14 01:06:17.016477 kubelet[2838]: I0114 01:06:17.013343 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vq2\" (UniqueName: \"kubernetes.io/projected/84272add-ebcc-42f7-bffa-247234ecb849-kube-api-access-n5vq2\") pod \"calico-kube-controllers-79fc68bf94-hv66g\" (UID: \"84272add-ebcc-42f7-bffa-247234ecb849\") " pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" Jan 14 01:06:17.016477 kubelet[2838]: I0114 01:06:17.013487 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbm7\" (UniqueName: \"kubernetes.io/projected/addca0d5-666f-4204-b79f-467069ad43cd-kube-api-access-6bbm7\") pod \"coredns-674b8bbfcf-jgpk9\" (UID: \"addca0d5-666f-4204-b79f-467069ad43cd\") " pod="kube-system/coredns-674b8bbfcf-jgpk9" Jan 14 01:06:17.016477 kubelet[2838]: I0114 01:06:17.013640 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-backend-key-pair\") pod \"whisker-578dbbcb9d-f2xjm\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " pod="calico-system/whisker-578dbbcb9d-f2xjm" Jan 14 01:06:17.016477 kubelet[2838]: I0114 01:06:17.013673 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxf5\" (UniqueName: \"kubernetes.io/projected/8938f0a3-97c1-409e-8512-d0019d691e13-kube-api-access-hxxf5\") pod \"whisker-578dbbcb9d-f2xjm\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " pod="calico-system/whisker-578dbbcb9d-f2xjm" Jan 14 01:06:17.016477 kubelet[2838]: 
I0114 01:06:17.013862 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84272add-ebcc-42f7-bffa-247234ecb849-tigera-ca-bundle\") pod \"calico-kube-controllers-79fc68bf94-hv66g\" (UID: \"84272add-ebcc-42f7-bffa-247234ecb849\") " pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" Jan 14 01:06:17.027614 containerd[1606]: time="2026-01-14T01:06:17.027525064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4qzsv,Uid:8fab71db-e780-4c15-9007-1a003926cb63,Namespace:kube-system,Attempt:0,}" Jan 14 01:06:17.036038 systemd[1]: Created slice kubepods-burstable-podaddca0d5_666f_4204_b79f_467069ad43cd.slice - libcontainer container kubepods-burstable-podaddca0d5_666f_4204_b79f_467069ad43cd.slice. Jan 14 01:06:17.084550 systemd[1]: Created slice kubepods-besteffort-podb6e624a7_3dff_47e8_aa9a_152cfa985108.slice - libcontainer container kubepods-besteffort-podb6e624a7_3dff_47e8_aa9a_152cfa985108.slice. Jan 14 01:06:17.112190 systemd[1]: Created slice kubepods-besteffort-pod84272add_ebcc_42f7_bffa_247234ecb849.slice - libcontainer container kubepods-besteffort-pod84272add_ebcc_42f7_bffa_247234ecb849.slice. Jan 14 01:06:17.114152 kubelet[2838]: I0114 01:06:17.114086 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c1341b7-068c-4ccd-ba99-cbf173a0144f-goldmane-ca-bundle\") pod \"goldmane-666569f655-5j8fj\" (UID: \"5c1341b7-068c-4ccd-ba99-cbf173a0144f\") " pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.114152 kubelet[2838]: I0114 01:06:17.114140 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5c1341b7-068c-4ccd-ba99-cbf173a0144f-goldmane-key-pair\") pod \"goldmane-666569f655-5j8fj\" (UID: \"5c1341b7-068c-4ccd-ba99-cbf173a0144f\") " pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.114934 kubelet[2838]: I0114 01:06:17.114223 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1341b7-068c-4ccd-ba99-cbf173a0144f-config\") pod \"goldmane-666569f655-5j8fj\" (UID: \"5c1341b7-068c-4ccd-ba99-cbf173a0144f\") " pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.114934 kubelet[2838]: I0114 01:06:17.114266 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49pg\" (UniqueName: \"kubernetes.io/projected/5c1341b7-068c-4ccd-ba99-cbf173a0144f-kube-api-access-t49pg\") pod \"goldmane-666569f655-5j8fj\" (UID: \"5c1341b7-068c-4ccd-ba99-cbf173a0144f\") " pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.114934 kubelet[2838]: I0114 01:06:17.114352 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6e624a7-3dff-47e8-aa9a-152cfa985108-calico-apiserver-certs\") pod \"calico-apiserver-599f9958f6-jwfn5\" (UID: \"b6e624a7-3dff-47e8-aa9a-152cfa985108\") " pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" Jan 14 01:06:17.114934 kubelet[2838]: I0114 01:06:17.114388 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/4615652e-e23f-4213-b02a-57b7f1d1c9f0-calico-apiserver-certs\") pod \"calico-apiserver-599f9958f6-q8lcg\" (UID: \"4615652e-e23f-4213-b02a-57b7f1d1c9f0\") " pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" Jan 14 01:06:17.114934 kubelet[2838]: I0114 01:06:17.114427 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfph8\" (UniqueName: \"kubernetes.io/projected/b6e624a7-3dff-47e8-aa9a-152cfa985108-kube-api-access-cfph8\") pod \"calico-apiserver-599f9958f6-jwfn5\" (UID: \"b6e624a7-3dff-47e8-aa9a-152cfa985108\") " pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" Jan 14 01:06:17.116017 kubelet[2838]: I0114 01:06:17.114453 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb4s\" (UniqueName: \"kubernetes.io/projected/4615652e-e23f-4213-b02a-57b7f1d1c9f0-kube-api-access-dnb4s\") pod \"calico-apiserver-599f9958f6-q8lcg\" (UID: \"4615652e-e23f-4213-b02a-57b7f1d1c9f0\") " pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" Jan 14 01:06:17.139413 systemd[1]: Created slice kubepods-besteffort-pod5c1341b7_068c_4ccd_ba99_cbf173a0144f.slice - libcontainer container kubepods-besteffort-pod5c1341b7_068c_4ccd_ba99_cbf173a0144f.slice. Jan 14 01:06:17.185403 systemd[1]: Created slice kubepods-besteffort-pod4615652e_e23f_4213_b02a_57b7f1d1c9f0.slice - libcontainer container kubepods-besteffort-pod4615652e_e23f_4213_b02a_57b7f1d1c9f0.slice. Jan 14 01:06:17.257615 containerd[1606]: time="2026-01-14T01:06:17.257553609Z" level=error msg="Failed to destroy network for sandbox \"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.261268 containerd[1606]: time="2026-01-14T01:06:17.261201407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4qzsv,Uid:8fab71db-e780-4c15-9007-1a003926cb63,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.261666 kubelet[2838]: E0114 01:06:17.261537 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.261666 kubelet[2838]: E0114 01:06:17.261617 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4qzsv" Jan 14 01:06:17.261666 kubelet[2838]: E0114 01:06:17.261649 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4qzsv" Jan 14 01:06:17.261950 kubelet[2838]: E0114 01:06:17.261719 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4qzsv_kube-system(8fab71db-e780-4c15-9007-1a003926cb63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4qzsv_kube-system(8fab71db-e780-4c15-9007-1a003926cb63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfe790671f2f0ae480766284de33b5ea084cf3ebbe4aa98786cf58ddb8d788e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4qzsv" podUID="8fab71db-e780-4c15-9007-1a003926cb63" Jan 14 01:06:17.269204 containerd[1606]: time="2026-01-14T01:06:17.269063810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-578dbbcb9d-f2xjm,Uid:8938f0a3-97c1-409e-8512-d0019d691e13,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:17.339831 containerd[1606]: time="2026-01-14T01:06:17.339766037Z" level=error msg="Failed to destroy network for sandbox \"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.343191 containerd[1606]: time="2026-01-14T01:06:17.343115343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-578dbbcb9d-f2xjm,Uid:8938f0a3-97c1-409e-8512-d0019d691e13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.343535 kubelet[2838]: E0114 01:06:17.343477 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.343729 kubelet[2838]: E0114 01:06:17.343567 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-578dbbcb9d-f2xjm" Jan 14 01:06:17.343729 kubelet[2838]: E0114 01:06:17.343600 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-578dbbcb9d-f2xjm" Jan 14 01:06:17.343964 kubelet[2838]: E0114 01:06:17.343701 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-578dbbcb9d-f2xjm_calico-system(8938f0a3-97c1-409e-8512-d0019d691e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-578dbbcb9d-f2xjm_calico-system(8938f0a3-97c1-409e-8512-d0019d691e13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3f2bbd8835e39fb08fc9ee9690f9123d45e2521500473755bc03df98efb8772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-578dbbcb9d-f2xjm" podUID="8938f0a3-97c1-409e-8512-d0019d691e13" Jan 14 01:06:17.367366 containerd[1606]: time="2026-01-14T01:06:17.367300221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jgpk9,Uid:addca0d5-666f-4204-b79f-467069ad43cd,Namespace:kube-system,Attempt:0,}" Jan 14 01:06:17.397795 containerd[1606]: time="2026-01-14T01:06:17.397740273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-jwfn5,Uid:b6e624a7-3dff-47e8-aa9a-152cfa985108,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:06:17.430664 containerd[1606]: time="2026-01-14T01:06:17.430573360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fc68bf94-hv66g,Uid:84272add-ebcc-42f7-bffa-247234ecb849,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:17.493945 containerd[1606]: time="2026-01-14T01:06:17.493888631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5j8fj,Uid:5c1341b7-068c-4ccd-ba99-cbf173a0144f,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:17.495619 systemd[1]: run-netns-cni\x2dfb30a184\x2d27cd\x2d2a05\x2d6f58\x2d0b93a49c211b.mount: Deactivated successfully. Jan 14 01:06:17.507237 containerd[1606]: time="2026-01-14T01:06:17.507187799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-q8lcg,Uid:4615652e-e23f-4213-b02a-57b7f1d1c9f0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:06:17.549003 systemd[1]: Created slice kubepods-besteffort-pod8d5160b5_f0c7_4f7b_963d_652ff95653a3.slice - libcontainer container kubepods-besteffort-pod8d5160b5_f0c7_4f7b_963d_652ff95653a3.slice. Jan 14 01:06:17.554507 containerd[1606]: time="2026-01-14T01:06:17.554393217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6cw62,Uid:8d5160b5-f0c7-4f7b-963d-652ff95653a3,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:17.563399 containerd[1606]: time="2026-01-14T01:06:17.563322500Z" level=error msg="Failed to destroy network for sandbox \"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.572301 systemd[1]: run-netns-cni\x2d16499871\x2d5151\x2d6f09\x2dfdf4\x2d1af96caedd4e.mount: Deactivated successfully. 
Jan 14 01:06:17.579767 containerd[1606]: time="2026-01-14T01:06:17.579686794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jgpk9,Uid:addca0d5-666f-4204-b79f-467069ad43cd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.580661 kubelet[2838]: E0114 01:06:17.580573 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.581193 kubelet[2838]: E0114 01:06:17.580705 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jgpk9" Jan 14 01:06:17.581193 kubelet[2838]: E0114 01:06:17.580744 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jgpk9" Jan 14 01:06:17.581193 kubelet[2838]: E0114 01:06:17.580972 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jgpk9_kube-system(addca0d5-666f-4204-b79f-467069ad43cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jgpk9_kube-system(addca0d5-666f-4204-b79f-467069ad43cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"644e67a9fc592a4dcd18bbe3112e27bb6e10d675ac195d768029e502d5bc5c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jgpk9" podUID="addca0d5-666f-4204-b79f-467069ad43cd" Jan 14 01:06:17.663521 containerd[1606]: time="2026-01-14T01:06:17.663383634Z" level=error msg="Failed to destroy network for sandbox \"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.671091 containerd[1606]: time="2026-01-14T01:06:17.670982219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-jwfn5,Uid:b6e624a7-3dff-47e8-aa9a-152cfa985108,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.672210 kubelet[2838]: E0114 01:06:17.671939 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.672210 kubelet[2838]: E0114 01:06:17.672038 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" Jan 14 01:06:17.672210 kubelet[2838]: E0114 01:06:17.672129 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" Jan 14 01:06:17.672485 kubelet[2838]: E0114 01:06:17.672240 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599f9958f6-jwfn5_calico-apiserver(b6e624a7-3dff-47e8-aa9a-152cfa985108)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599f9958f6-jwfn5_calico-apiserver(b6e624a7-3dff-47e8-aa9a-152cfa985108)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97d86f8eba26b1ffa513bde9b0d7fc2db766a4060050ff32b53fc9b7e02e8ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:06:17.728095 containerd[1606]: time="2026-01-14T01:06:17.727981537Z" level=error msg="Failed to destroy network for sandbox \"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.733604 containerd[1606]: time="2026-01-14T01:06:17.732352699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fc68bf94-hv66g,Uid:84272add-ebcc-42f7-bffa-247234ecb849,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.733844 
kubelet[2838]: E0114 01:06:17.732677 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.733844 kubelet[2838]: E0114 01:06:17.732773 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" Jan 14 01:06:17.733844 kubelet[2838]: E0114 01:06:17.732808 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" Jan 14 01:06:17.734033 kubelet[2838]: E0114 01:06:17.732885 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79fc68bf94-hv66g_calico-system(84272add-ebcc-42f7-bffa-247234ecb849)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79fc68bf94-hv66g_calico-system(84272add-ebcc-42f7-bffa-247234ecb849)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ada739f419fd8ca9d6ec8fe11dc12bde2a707c2854ef10910e7985755d47c2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:17.771017 containerd[1606]: time="2026-01-14T01:06:17.770853156Z" level=error msg="Failed to destroy network for sandbox \"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.779133 containerd[1606]: time="2026-01-14T01:06:17.778622049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5j8fj,Uid:5c1341b7-068c-4ccd-ba99-cbf173a0144f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.782210 containerd[1606]: time="2026-01-14T01:06:17.778987373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:06:17.786558 kubelet[2838]: E0114 01:06:17.786241 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.786558 kubelet[2838]: E0114 01:06:17.786359 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.786558 kubelet[2838]: E0114 01:06:17.786408 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5j8fj" Jan 14 01:06:17.786849 kubelet[2838]: E0114 01:06:17.786553 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5j8fj_calico-system(5c1341b7-068c-4ccd-ba99-cbf173a0144f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5j8fj_calico-system(5c1341b7-068c-4ccd-ba99-cbf173a0144f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9fb128e6fb8f34f87f0452f467c6d170c5045d2e36923ff2163b09c4a260a04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:17.830884 containerd[1606]: time="2026-01-14T01:06:17.830687524Z" level=error msg="Failed to destroy network for sandbox \"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.837626 containerd[1606]: time="2026-01-14T01:06:17.835257908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-q8lcg,Uid:4615652e-e23f-4213-b02a-57b7f1d1c9f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.837891 kubelet[2838]: E0114 01:06:17.835591 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.837891 kubelet[2838]: 
E0114 01:06:17.835689 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" Jan 14 01:06:17.837891 kubelet[2838]: E0114 01:06:17.835721 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" Jan 14 01:06:17.838119 kubelet[2838]: E0114 01:06:17.835790 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599f9958f6-q8lcg_calico-apiserver(4615652e-e23f-4213-b02a-57b7f1d1c9f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599f9958f6-q8lcg_calico-apiserver(4615652e-e23f-4213-b02a-57b7f1d1c9f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aee6a6157a7171c7c1cffa681b5b36891d08107d92788e3dcf505b7206954d47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:06:17.847935 containerd[1606]: time="2026-01-14T01:06:17.847864330Z" level=error msg="Failed to destroy network for sandbox \"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.851642 containerd[1606]: time="2026-01-14T01:06:17.851555510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6cw62,Uid:8d5160b5-f0c7-4f7b-963d-652ff95653a3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.852049 kubelet[2838]: E0114 01:06:17.851985 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:06:17.852681 kubelet[2838]: E0114 01:06:17.852611 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:17.852807 kubelet[2838]: E0114 01:06:17.852682 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6cw62" Jan 14 01:06:17.852874 kubelet[2838]: E0114 01:06:17.852830 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aea33718b6251ab6b1d3f6eee106eef5d95ebf197dc9a1694bc506b30652140e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:18.444585 systemd[1]: run-netns-cni\x2d323c65e6\x2d3fc4\x2d5178\x2d8d78\x2d66c9714721e5.mount: Deactivated successfully. Jan 14 01:06:18.444733 systemd[1]: run-netns-cni\x2d85bf5813\x2da485\x2dc261\x2dbfcf\x2d7e8ff5399bb9.mount: Deactivated successfully. Jan 14 01:06:18.444834 systemd[1]: run-netns-cni\x2d0fda4bb9\x2d0ec2\x2da3ad\x2dcb1b\x2de661a364ea87.mount: Deactivated successfully. Jan 14 01:06:18.445191 systemd[1]: run-netns-cni\x2d4a74dbda\x2d5a7c\x2dbac4\x2df107\x2d1751e89d0c48.mount: Deactivated successfully. Jan 14 01:06:18.445333 systemd[1]: run-netns-cni\x2da27052f2\x2d6587\x2dbee4\x2da3b7\x2df136a8920a2c.mount: Deactivated successfully. Jan 14 01:06:24.530686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910492678.mount: Deactivated successfully. 
Jan 14 01:06:24.562599 containerd[1606]: time="2026-01-14T01:06:24.562524001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:24.563705 containerd[1606]: time="2026-01-14T01:06:24.563651860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:06:24.565106 containerd[1606]: time="2026-01-14T01:06:24.565010540Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:24.567527 containerd[1606]: time="2026-01-14T01:06:24.567463673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:24.568346 containerd[1606]: time="2026-01-14T01:06:24.568301304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.785996673s" Jan 14 01:06:24.568591 containerd[1606]: time="2026-01-14T01:06:24.568352759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:06:24.594054 containerd[1606]: time="2026-01-14T01:06:24.593966367Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:06:24.612110 containerd[1606]: time="2026-01-14T01:06:24.611331330Z" level=info msg="Container 36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:24.625464 containerd[1606]: time="2026-01-14T01:06:24.625397267Z" level=info msg="CreateContainer within sandbox \"e994a3c62e317c0c41522b23751835d2fecacd5422ec6cb895f2b1eac982af3a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047\"" Jan 14 01:06:24.627143 containerd[1606]: time="2026-01-14T01:06:24.626159075Z" level=info msg="StartContainer for \"36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047\"" Jan 14 01:06:24.639481 containerd[1606]: time="2026-01-14T01:06:24.639385867Z" level=info msg="connecting to shim 36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047" address="unix:///run/containerd/s/7f28236d0f3c6de51afcb4ce61c794118a8c707506feb74a749f3d2ca4966c50" protocol=ttrpc version=3 Jan 14 01:06:24.672366 systemd[1]: Started cri-containerd-36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047.scope - libcontainer container 36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047. 
Jan 14 01:06:24.758000 audit: BPF prog-id=175 op=LOAD Jan 14 01:06:24.766139 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:06:24.766292 kernel: audit: type=1334 audit(1768352784.758:562): prog-id=175 op=LOAD Jan 14 01:06:24.758000 audit[3824]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.802366 kernel: audit: type=1300 audit(1768352784.758:562): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.802531 kernel: audit: type=1327 audit(1768352784.758:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.758000 audit: BPF prog-id=176 op=LOAD Jan 14 01:06:24.838680 kernel: audit: type=1334 audit(1768352784.758:563): prog-id=176 op=LOAD Jan 14 01:06:24.758000 audit[3824]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.898565 kernel: audit: type=1300 audit(1768352784.758:563): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.900279 kernel: audit: type=1327 audit(1768352784.758:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.900337 kernel: audit: type=1334 audit(1768352784.758:564): prog-id=176 op=UNLOAD Jan 14 01:06:24.758000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:06:24.906130 kernel: audit: type=1300 audit(1768352784.758:564): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.758000 
audit[3824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.964104 kernel: audit: type=1327 audit(1768352784.758:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.758000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:06:24.758000 audit[3824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.972095 kernel: audit: type=1334 audit(1768352784.758:565): prog-id=175 op=UNLOAD Jan 14 01:06:24.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.758000 audit: BPF prog-id=177 op=LOAD Jan 14 01:06:24.758000 audit[3824]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3366 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:24.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336323833313833616338353932376462366535653336326639626335 Jan 14 01:06:24.978857 containerd[1606]: time="2026-01-14T01:06:24.978802292Z" level=info msg="StartContainer for \"36283183ac85927db6e5e362f9bc5493d1efb32935343e19f668129f281a0047\" returns successfully" Jan 14 01:06:25.058744 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:06:25.058975 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 14 01:06:25.278580 kubelet[2838]: I0114 01:06:25.278520 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-backend-key-pair\") pod \"8938f0a3-97c1-409e-8512-d0019d691e13\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " Jan 14 01:06:25.278580 kubelet[2838]: I0114 01:06:25.278582 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxxf5\" (UniqueName: \"kubernetes.io/projected/8938f0a3-97c1-409e-8512-d0019d691e13-kube-api-access-hxxf5\") pod \"8938f0a3-97c1-409e-8512-d0019d691e13\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " Jan 14 01:06:25.279267 kubelet[2838]: I0114 01:06:25.278625 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-ca-bundle\") pod \"8938f0a3-97c1-409e-8512-d0019d691e13\" (UID: \"8938f0a3-97c1-409e-8512-d0019d691e13\") " Jan 14 01:06:25.287462 kubelet[2838]: I0114 01:06:25.287387 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8938f0a3-97c1-409e-8512-d0019d691e13" (UID: "8938f0a3-97c1-409e-8512-d0019d691e13"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:06:25.291063 kubelet[2838]: I0114 01:06:25.290636 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8938f0a3-97c1-409e-8512-d0019d691e13" (UID: "8938f0a3-97c1-409e-8512-d0019d691e13"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:06:25.300503 kubelet[2838]: I0114 01:06:25.298223 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8938f0a3-97c1-409e-8512-d0019d691e13-kube-api-access-hxxf5" (OuterVolumeSpecName: "kube-api-access-hxxf5") pod "8938f0a3-97c1-409e-8512-d0019d691e13" (UID: "8938f0a3-97c1-409e-8512-d0019d691e13"). InnerVolumeSpecName "kube-api-access-hxxf5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:06:25.379507 kubelet[2838]: I0114 01:06:25.379395 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-ca-bundle\") on node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" DevicePath \"\"" Jan 14 01:06:25.379507 kubelet[2838]: I0114 01:06:25.379450 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8938f0a3-97c1-409e-8512-d0019d691e13-whisker-backend-key-pair\") on node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" DevicePath \"\"" Jan 14 01:06:25.379507 kubelet[2838]: I0114 01:06:25.379470 2838 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxxf5\" (UniqueName: \"kubernetes.io/projected/8938f0a3-97c1-409e-8512-d0019d691e13-kube-api-access-hxxf5\") on node \"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3\" DevicePath \"\"" Jan 14 01:06:25.530250 systemd[1]: var-lib-kubelet-pods-8938f0a3\x2d97c1\x2d409e\x2d8512\x2dd0019d691e13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxxf5.mount: Deactivated successfully. Jan 14 01:06:25.530523 systemd[1]: var-lib-kubelet-pods-8938f0a3\x2d97c1\x2d409e\x2d8512\x2dd0019d691e13-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:06:25.537406 systemd[1]: Removed slice kubepods-besteffort-pod8938f0a3_97c1_409e_8512_d0019d691e13.slice - libcontainer container kubepods-besteffort-pod8938f0a3_97c1_409e_8512_d0019d691e13.slice. Jan 14 01:06:25.839125 kubelet[2838]: I0114 01:06:25.838289 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w6g7s" podStartSLOduration=2.611261438 podStartE2EDuration="20.838245068s" podCreationTimestamp="2026-01-14 01:06:05 +0000 UTC" firstStartedPulling="2026-01-14 01:06:06.342721614 +0000 UTC m=+25.120363417" lastFinishedPulling="2026-01-14 01:06:24.569705237 +0000 UTC m=+43.347347047" observedRunningTime="2026-01-14 01:06:25.835789035 +0000 UTC m=+44.613430874" watchObservedRunningTime="2026-01-14 01:06:25.838245068 +0000 UTC m=+44.615886882" Jan 14 01:06:25.950309 systemd[1]: Created slice kubepods-besteffort-pod1e67631e_c93b_4515_bed4_6b4bf787eb57.slice - libcontainer container kubepods-besteffort-pod1e67631e_c93b_4515_bed4_6b4bf787eb57.slice. 
Jan 14 01:06:25.987424 kubelet[2838]: I0114 01:06:25.987364 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zs6\" (UniqueName: \"kubernetes.io/projected/1e67631e-c93b-4515-bed4-6b4bf787eb57-kube-api-access-x4zs6\") pod \"whisker-5dd9b68968-72tft\" (UID: \"1e67631e-c93b-4515-bed4-6b4bf787eb57\") " pod="calico-system/whisker-5dd9b68968-72tft" Jan 14 01:06:25.987631 kubelet[2838]: I0114 01:06:25.987439 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e67631e-c93b-4515-bed4-6b4bf787eb57-whisker-ca-bundle\") pod \"whisker-5dd9b68968-72tft\" (UID: \"1e67631e-c93b-4515-bed4-6b4bf787eb57\") " pod="calico-system/whisker-5dd9b68968-72tft" Jan 14 01:06:25.987631 kubelet[2838]: I0114 01:06:25.987475 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e67631e-c93b-4515-bed4-6b4bf787eb57-whisker-backend-key-pair\") pod \"whisker-5dd9b68968-72tft\" (UID: \"1e67631e-c93b-4515-bed4-6b4bf787eb57\") " pod="calico-system/whisker-5dd9b68968-72tft" Jan 14 01:06:26.255908 containerd[1606]: time="2026-01-14T01:06:26.255521341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9b68968-72tft,Uid:1e67631e-c93b-4515-bed4-6b4bf787eb57,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:26.438456 systemd-networkd[1503]: calicde0e6e82f7: Link UP Jan 14 01:06:26.440554 systemd-networkd[1503]: calicde0e6e82f7: Gained carrier Jan 14 01:06:26.474971 containerd[1606]: 2026-01-14 01:06:26.296 [INFO][3910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:06:26.474971 containerd[1606]: 2026-01-14 01:06:26.314 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0 whisker-5dd9b68968- calico-system 1e67631e-c93b-4515-bed4-6b4bf787eb57 903 0 2026-01-14 01:06:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dd9b68968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 whisker-5dd9b68968-72tft eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicde0e6e82f7 [] [] }} ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-" Jan 14 01:06:26.474971 containerd[1606]: 2026-01-14 01:06:26.314 [INFO][3910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.474971 containerd[1606]: 2026-01-14 01:06:26.349 [INFO][3922] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" HandleID="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" 
Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.349 [INFO][3922] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" HandleID="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"whisker-5dd9b68968-72tft", "timestamp":"2026-01-14 01:06:26.349186333 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.349 [INFO][3922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.349 [INFO][3922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.349 [INFO][3922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.359 [INFO][3922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.365 [INFO][3922] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.372 [INFO][3922] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.476647 containerd[1606]: 2026-01-14 01:06:26.375 [INFO][3922] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.377 [INFO][3922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.377 [INFO][3922] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.380 [INFO][3922] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1 Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.394 [INFO][3922] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.410 [INFO][3922] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.1/26] block=192.168.93.0/26 
handle="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.413 [INFO][3922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.1/26] handle="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.413 [INFO][3922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:06:26.477135 containerd[1606]: 2026-01-14 01:06:26.413 [INFO][3922] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.1/26] IPv6=[] ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" HandleID="k8s-pod-network.a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.477731 containerd[1606]: 2026-01-14 01:06:26.420 [INFO][3910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0", GenerateName:"whisker-5dd9b68968-", Namespace:"calico-system", SelfLink:"", UID:"1e67631e-c93b-4515-bed4-6b4bf787eb57", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd9b68968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"whisker-5dd9b68968-72tft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicde0e6e82f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:26.478628 containerd[1606]: 2026-01-14 01:06:26.420 [INFO][3910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.1/32] ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.478628 containerd[1606]: 2026-01-14 01:06:26.420 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicde0e6e82f7 ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" 
WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.478628 containerd[1606]: 2026-01-14 01:06:26.439 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.479679 containerd[1606]: 2026-01-14 01:06:26.439 [INFO][3910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0", GenerateName:"whisker-5dd9b68968-", Namespace:"calico-system", SelfLink:"", UID:"1e67631e-c93b-4515-bed4-6b4bf787eb57", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd9b68968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1", Pod:"whisker-5dd9b68968-72tft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicde0e6e82f7", MAC:"fe:f8:f4:77:d0:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:26.481449 containerd[1606]: 2026-01-14 01:06:26.469 [INFO][3910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" Namespace="calico-system" Pod="whisker-5dd9b68968-72tft" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-whisker--5dd9b68968--72tft-eth0" Jan 14 01:06:26.540757 containerd[1606]: time="2026-01-14T01:06:26.540577510Z" level=info msg="connecting to shim a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1" address="unix:///run/containerd/s/e478e2d1745904c216b7d43703e90d4986b3c27116161539d6583fe96dd6e889" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:26.615345 systemd[1]: Started cri-containerd-a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1.scope - libcontainer container a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1. 
Jan 14 01:06:26.651000 audit: BPF prog-id=178 op=LOAD Jan 14 01:06:26.652000 audit: BPF prog-id=179 op=LOAD Jan 14 01:06:26.652000 audit[3959]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.652000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:06:26.652000 audit[3959]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.652000 audit: BPF prog-id=180 op=LOAD Jan 14 01:06:26.652000 audit[3959]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.652000 audit: BPF prog-id=181 op=LOAD Jan 14 01:06:26.652000 audit[3959]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.652000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:06:26.652000 audit[3959]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.653000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:06:26.653000 audit[3959]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.653000 audit: BPF prog-id=182 op=LOAD Jan 14 01:06:26.653000 audit[3959]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3948 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:26.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313938303665343662386132353232343565353032353766363864 Jan 14 01:06:26.742396 containerd[1606]: time="2026-01-14T01:06:26.742233914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9b68968-72tft,Uid:1e67631e-c93b-4515-bed4-6b4bf787eb57,Namespace:calico-system,Attempt:0,} returns sandbox id \"a819806e46b8a252245e50257f68d220c58660e814c2f4239a391a112d45bfa1\"" Jan 14 01:06:26.748385 containerd[1606]: time="2026-01-14T01:06:26.748210735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:06:26.927722 containerd[1606]: time="2026-01-14T01:06:26.927383160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:26.929337 containerd[1606]: time="2026-01-14T01:06:26.929237211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:06:26.929685 containerd[1606]: time="2026-01-14T01:06:26.929276032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:26.930758 kubelet[2838]: E0114 01:06:26.930148 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:06:26.930758 kubelet[2838]: E0114 01:06:26.930241 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:06:26.931730 kubelet[2838]: E0114 01:06:26.930456 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5e3e935377b84f50a445073ab6fc7676,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:26.934752 containerd[1606]: time="2026-01-14T01:06:26.934706166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:06:27.116427 containerd[1606]: time="2026-01-14T01:06:27.116188783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:27.117917 containerd[1606]: time="2026-01-14T01:06:27.117782192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:06:27.118356 containerd[1606]: time="2026-01-14T01:06:27.118112419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:27.118576 kubelet[2838]: E0114 01:06:27.118530 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:06:27.119052 kubelet[2838]: E0114 01:06:27.118779 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:06:27.119175 kubelet[2838]: E0114 01:06:27.118981 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:27.120882 kubelet[2838]: E0114 01:06:27.120636 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:06:27.329000 audit: BPF prog-id=183 op=LOAD Jan 14 01:06:27.329000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b791670 a2=98 a3=1fffffffffffffff items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.329000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.330000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:06:27.330000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9b791640 a3=0 items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.330000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.330000 audit: BPF prog-id=184 op=LOAD Jan 14 01:06:27.330000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b791550 a2=94 a3=3 items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.330000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.330000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:06:27.330000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9b791550 a2=94 a3=3 items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.330000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.330000 audit: BPF prog-id=185 op=LOAD Jan 14 01:06:27.330000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b791590 a2=94 a3=7fff9b791770 items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.330000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.330000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:06:27.330000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9b791590 a2=94 a3=7fff9b791770 items=0 ppid=3982 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.330000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:06:27.331000 audit: BPF prog-id=186 op=LOAD Jan 14 01:06:27.331000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8d51eca0 a2=98 a3=3 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.332000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:06:27.332000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8d51ec70 a3=0 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.332000 audit: BPF prog-id=187 op=LOAD Jan 14 01:06:27.332000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d51ea90 a2=94 a3=54428f items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.332000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:06:27.332000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d51ea90 a2=94 a3=54428f items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.332000 audit: BPF prog-id=188 op=LOAD Jan 14 01:06:27.332000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d51eac0 a2=94 a3=2 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.332000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:06:27.332000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d51eac0 a2=0 a3=2 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.527743 kubelet[2838]: I0114 01:06:27.527651 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8938f0a3-97c1-409e-8512-d0019d691e13" path="/var/lib/kubelet/pods/8938f0a3-97c1-409e-8512-d0019d691e13/volumes" Jan 14 01:06:27.528000 audit: BPF prog-id=189 op=LOAD Jan 14 01:06:27.528000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7ffd8d51e980 a2=94 a3=1 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.528000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:06:27.528000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d51e980 a2=94 a3=1 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.543000 audit: BPF prog-id=190 op=LOAD Jan 14 01:06:27.543000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d51e970 a2=94 a3=4 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.543000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.544000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:06:27.544000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d51e970 a2=0 a3=4 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.544000 audit: BPF prog-id=191 op=LOAD Jan 14 01:06:27.544000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8d51e7d0 a2=94 a3=5 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.544000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:06:27.544000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd8d51e7d0 a2=0 a3=5 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.544000 audit: BPF prog-id=192 op=LOAD Jan 14 01:06:27.544000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d51e9f0 a2=94 a3=6 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.544000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:06:27.544000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d51e9f0 a2=0 a3=6 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.545000 audit: BPF prog-id=193 op=LOAD Jan 14 01:06:27.545000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d51e1a0 a2=94 a3=88 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.545000 audit: BPF prog-id=194 op=LOAD Jan 14 01:06:27.545000 audit[4122]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8d51e020 a2=94 a3=2 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.545000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:06:27.545000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8d51e050 a2=0 a3=7ffd8d51e150 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.546000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:06:27.546000 audit[4122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=aab7d10 a2=0 a3=f87bacdf978b6534 items=0 ppid=3982 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.546000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:06:27.559000 audit: BPF prog-id=195 op=LOAD Jan 14 01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd1937b0 a2=98 a3=1999999999999999 items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.559000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdbd193780 a3=0 items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.559000 audit: BPF prog-id=196 op=LOAD Jan 14 
01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd193690 a2=94 a3=ffff items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.559000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbd193690 a2=94 a3=ffff items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.559000 audit: BPF prog-id=197 op=LOAD Jan 14 01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd1936d0 a2=94 a3=7ffdbd1938b0 items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.559000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:06:27.559000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbd1936d0 a2=94 a3=7ffdbd1938b0 items=0 ppid=3982 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.559000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:06:27.659505 systemd-networkd[1503]: vxlan.calico: Link UP Jan 14 01:06:27.659519 systemd-networkd[1503]: vxlan.calico: Gained carrier Jan 14 01:06:27.694000 audit: BPF prog-id=198 op=LOAD Jan 14 01:06:27.694000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7cc76dc0 a2=98 a3=0 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.694000 audit: BPF 
prog-id=198 op=UNLOAD Jan 14 01:06:27.694000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff7cc76d90 a3=0 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=199 op=LOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7cc76bd0 a2=94 a3=54428f items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7cc76bd0 a2=94 a3=54428f items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=200 op=LOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7cc76c00 a2=94 a3=2 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7cc76c00 a2=0 a3=2 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=201 op=LOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff7cc769b0 a2=94 a3=4 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff7cc769b0 a2=94 a3=4 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.695000 audit: BPF prog-id=202 op=LOAD Jan 14 01:06:27.695000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff7cc76ab0 a2=94 a3=7fff7cc76c30 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.695000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.696000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:06:27.696000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff7cc76ab0 a2=0 a3=7fff7cc76c30 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.698000 audit: BPF prog-id=203 op=LOAD Jan 14 01:06:27.698000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff7cc761e0 a2=94 a3=2 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.698000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.698000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:06:27.698000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff7cc761e0 a2=0 a3=2 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.698000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 
01:06:27.698000 audit: BPF prog-id=204 op=LOAD Jan 14 01:06:27.698000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff7cc762e0 a2=94 a3=30 items=0 ppid=3982 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.698000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:06:27.709000 audit: BPF prog-id=205 op=LOAD Jan 14 01:06:27.709000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9f0aabe0 a2=98 a3=0 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.709000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.710000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:06:27.710000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9f0aabb0 a3=0 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.710000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.710000 audit: BPF prog-id=206 op=LOAD Jan 14 01:06:27.710000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f0aa9d0 a2=94 a3=54428f items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.710000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.710000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:06:27.710000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f0aa9d0 a2=94 a3=54428f items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.710000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.710000 audit: BPF prog-id=207 op=LOAD Jan 14 01:06:27.710000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f0aaa00 a2=94 a3=2 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.710000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.710000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:06:27.710000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f0aaa00 a2=0 a3=2 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.710000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.821323 kubelet[2838]: E0114 01:06:27.821260 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:06:27.864328 systemd-networkd[1503]: calicde0e6e82f7: Gained IPv6LL Jan 14 01:06:27.868000 audit[4160]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:27.868000 audit[4160]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc825382d0 a2=0 a3=7ffc825382bc items=0 ppid=3020 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:27.878000 audit[4160]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:27.878000 audit[4160]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc825382d0 a2=0 a3=0 items=0 ppid=3020 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:27.944000 audit: BPF prog-id=208 op=LOAD Jan 14 01:06:27.944000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f0aa8c0 a2=94 a3=1 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.944000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.945000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:06:27.945000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f0aa8c0 a2=94 a3=1 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.945000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=209 op=LOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f0aa8b0 a2=94 a3=4 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9f0aa8b0 a2=0 a3=4 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=210 op=LOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9f0aa710 a2=94 a3=5 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9f0aa710 a2=0 a3=5 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=211 op=LOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f0aa930 a2=94 a3=6 items=0 ppid=3982 pid=4156 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.960000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:06:27.960000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9f0aa930 a2=0 a3=6 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.961000 audit: BPF prog-id=212 op=LOAD Jan 14 01:06:27.961000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f0aa0e0 a2=94 a3=88 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.961000 audit: BPF prog-id=213 op=LOAD Jan 14 01:06:27.961000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff9f0a9f60 a2=94 a3=2 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.961000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:06:27.961000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff9f0a9f90 a2=0 a3=7fff9f0aa090 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.962000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:06:27.962000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=68e3d10 a2=0 a3=6b90826a3c403d71 items=0 ppid=3982 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:06:27.970000 audit: BPF prog-id=204 op=UNLOAD Jan 
14 01:06:27.970000 audit[3982]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000df6500 a2=0 a3=0 items=0 ppid=3971 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:27.970000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:06:28.051000 audit[4182]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:28.051000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff3f1dd0e0 a2=0 a3=7fff3f1dd0cc items=0 ppid=3982 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:28.051000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:28.054000 audit[4183]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4183 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:28.054000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffee3dd1f90 a2=0 a3=7ffee3dd1f7c items=0 ppid=3982 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:28.054000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:28.065000 audit[4180]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:28.065000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe90c6a920 a2=0 a3=7ffe90c6a90c items=0 ppid=3982 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:28.065000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:28.073000 audit[4186]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:28.073000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffca0e3e700 a2=0 a3=7ffca0e3e6ec items=0 ppid=3982 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:28.073000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:29.521839 containerd[1606]: time="2026-01-14T01:06:29.521500416Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-5j8fj,Uid:5c1341b7-068c-4ccd-ba99-cbf173a0144f,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:29.522415 containerd[1606]: time="2026-01-14T01:06:29.521997213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-q8lcg,Uid:4615652e-e23f-4213-b02a-57b7f1d1c9f0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:06:29.592469 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Jan 14 01:06:29.750830 systemd-networkd[1503]: cali28b7c97d0fd: Link UP Jan 14 01:06:29.757320 systemd-networkd[1503]: cali28b7c97d0fd: Gained carrier Jan 14 01:06:29.780182 containerd[1606]: 2026-01-14 01:06:29.613 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0 goldmane-666569f655- calico-system 5c1341b7-068c-4ccd-ba99-cbf173a0144f 838 0 2026-01-14 01:06:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 goldmane-666569f655-5j8fj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali28b7c97d0fd [] [] }} ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-" Jan 14 01:06:29.780182 containerd[1606]: 2026-01-14 01:06:29.614 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.780182 containerd[1606]: 2026-01-14 01:06:29.666 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" HandleID="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.667 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" HandleID="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"goldmane-666569f655-5j8fj", "timestamp":"2026-01-14 01:06:29.666863332 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.667 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.667 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.667 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.680 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.693 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.710 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780472 containerd[1606]: 2026-01-14 01:06:29.718 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.722 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.722 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.724 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04 Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.733 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.742 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.2/26] block=192.168.93.0/26 handle="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.742 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.2/26] handle="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.742 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:06:29.780968 containerd[1606]: 2026-01-14 01:06:29.743 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.2/26] IPv6=[] ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" HandleID="k8s-pod-network.f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.781345 containerd[1606]: 2026-01-14 01:06:29.747 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5c1341b7-068c-4ccd-ba99-cbf173a0144f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"goldmane-666569f655-5j8fj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28b7c97d0fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:29.781473 containerd[1606]: 2026-01-14 01:06:29.747 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.2/32] ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.781473 containerd[1606]: 2026-01-14 01:06:29.747 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b7c97d0fd ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.781473 containerd[1606]: 2026-01-14 01:06:29.751 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.781615 containerd[1606]: 2026-01-14 01:06:29.752 [INFO][4197] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5c1341b7-068c-4ccd-ba99-cbf173a0144f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04", Pod:"goldmane-666569f655-5j8fj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28b7c97d0fd", MAC:"2a:e4:59:0e:5e:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:29.781735 containerd[1606]: 2026-01-14 01:06:29.777 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" Namespace="calico-system" Pod="goldmane-666569f655-5j8fj" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-goldmane--666569f655--5j8fj-eth0" Jan 14 01:06:29.856536 containerd[1606]: time="2026-01-14T01:06:29.856419735Z" level=info msg="connecting to shim f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04" address="unix:///run/containerd/s/fd1427c9ec62b00ccb4a1041dfc6ffb63c686147f861c499a9d63f737580abf2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:29.867942 systemd-networkd[1503]: cali2546e5c6555: Link UP Jan 14 01:06:29.870014 systemd-networkd[1503]: cali2546e5c6555: Gained carrier Jan 14 01:06:29.911415 containerd[1606]: 2026-01-14 01:06:29.617 [INFO][4199] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0 calico-apiserver-599f9958f6- calico-apiserver 4615652e-e23f-4213-b02a-57b7f1d1c9f0 836 0 2026-01-14 01:05:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599f9958f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 calico-apiserver-599f9958f6-q8lcg eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali2546e5c6555 [] [] }} ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-" Jan 14 01:06:29.911415 containerd[1606]: 2026-01-14 01:06:29.618 [INFO][4199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.911415 containerd[1606]: 2026-01-14 01:06:29.719 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" HandleID="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.720 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" HandleID="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"calico-apiserver-599f9958f6-q8lcg", "timestamp":"2026-01-14 01:06:29.719034416 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.720 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.742 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.742 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.785 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.798 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.811 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.911761 containerd[1606]: 2026-01-14 01:06:29.823 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.831 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.831 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.836 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155 Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.846 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.857 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.3/26] block=192.168.93.0/26 handle="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.857 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.3/26] handle="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.858 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:06:29.913594 containerd[1606]: 2026-01-14 01:06:29.858 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.3/26] IPv6=[] ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" HandleID="k8s-pod-network.cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.916549 containerd[1606]: 2026-01-14 01:06:29.861 [INFO][4199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0", GenerateName:"calico-apiserver-599f9958f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4615652e-e23f-4213-b02a-57b7f1d1c9f0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599f9958f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"calico-apiserver-599f9958f6-q8lcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2546e5c6555", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:29.916677 containerd[1606]: 2026-01-14 01:06:29.862 [INFO][4199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.3/32] ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.916677 containerd[1606]: 2026-01-14 01:06:29.862 [INFO][4199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2546e5c6555 ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.916677 containerd[1606]: 2026-01-14 01:06:29.871 [INFO][4199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" 
WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.916831 containerd[1606]: 2026-01-14 01:06:29.874 [INFO][4199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0", GenerateName:"calico-apiserver-599f9958f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4615652e-e23f-4213-b02a-57b7f1d1c9f0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599f9958f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155", Pod:"calico-apiserver-599f9958f6-q8lcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2546e5c6555", MAC:"02:a3:2f:44:02:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:29.916964 containerd[1606]: 2026-01-14 01:06:29.900 [INFO][4199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-q8lcg" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--q8lcg-eth0" Jan 14 01:06:29.945020 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 14 01:06:29.945702 kernel: audit: type=1325 audit(1768352789.921:643): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:29.921000 audit[4262]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:29.921000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffce3381c50 a2=0 a3=7ffce3381c3c items=0 ppid=3982 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:29.980146 kernel: audit: type=1300 audit(1768352789.921:643): arch=c000003e syscall=46 success=yes exit=25180 a0=3 
a1=7ffce3381c50 a2=0 a3=7ffce3381c3c items=0 ppid=3982 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:29.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:30.007100 kernel: audit: type=1327 audit(1768352789.921:643): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:30.009499 systemd[1]: Started cri-containerd-f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04.scope - libcontainer container f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04. Jan 14 01:06:30.048301 containerd[1606]: time="2026-01-14T01:06:30.043202672Z" level=info msg="connecting to shim cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155" address="unix:///run/containerd/s/52b90d71abcb8a663fd3679b1bc675dcc0a7664d99cd03147733537f7059eeaa" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:30.084000 audit[4283]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:30.103113 kernel: audit: type=1325 audit(1768352790.084:644): table=filter:128 family=2 entries=60 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:30.084000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=32248 a0=3 a1=7fffce87abf0 a2=0 a3=7fffce87abdc items=0 ppid=3982 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.140704 kernel: audit: type=1300 audit(1768352790.084:644): arch=c000003e syscall=46 success=yes exit=32248 a0=3 a1=7fffce87abf0 a2=0 a3=7fffce87abdc items=0 ppid=3982 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.084000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:30.164740 kernel: audit: type=1327 audit(1768352790.084:644): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:30.168256 systemd[1]: Started cri-containerd-cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155.scope - libcontainer container cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155. 
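The long proctitle= values in the audit records throughout this section are the offending command lines, hex-encoded with NUL bytes separating the arguments; for example, the bpftool entries decode to invocations such as "bpftool map list --json", and the runc entries to "runc --root /run/containerd/runc/k8s.io --log …" (truncated by the audit subsystem). A small Go sketch of that decoding, using only the standard library and a sample value copied from one of the bpftool records above:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle converts an audit PROCTITLE value (hex-encoded argv with
    // NUL bytes between arguments) back into a readable command line.
    func decodeProctitle(raw string) (string, error) {
        b, err := hex.DecodeString(raw)
        if err != nil {
            return "", err
        }
        return strings.Join(strings.Split(string(b), "\x00"), " "), nil
    }

    func main() {
        // Sample proctitle value taken from a bpftool audit record in this log.
        cmd, err := decodeProctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E")
        if err != nil {
            panic(err)
        }
        fmt.Println(cmd) // prints: bpftool map list --json
    }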
Jan 14 01:06:30.182000 audit: BPF prog-id=214 op=LOAD Jan 14 01:06:30.199351 kernel: audit: type=1334 audit(1768352790.182:645): prog-id=214 op=LOAD Jan 14 01:06:30.199465 kernel: audit: type=1334 audit(1768352790.184:646): prog-id=215 op=LOAD Jan 14 01:06:30.184000 audit: BPF prog-id=215 op=LOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.229567 kernel: audit: type=1300 audit(1768352790.184:646): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.261115 kernel: audit: type=1327 audit(1768352790.184:646): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=216 op=LOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=217 op=LOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.184000 audit: BPF prog-id=218 op=LOAD Jan 14 01:06:30.184000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4251 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639636139636634643336313637653663303862653933623861626562 Jan 14 01:06:30.215000 audit: BPF prog-id=219 op=LOAD Jan 14 01:06:30.216000 audit: BPF prog-id=220 op=LOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=221 op=LOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=222 op=LOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.216000 audit: BPF prog-id=223 op=LOAD Jan 14 01:06:30.216000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4295 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.216000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363323234643466643662633635303831386263343762373266363838 Jan 14 01:06:30.327002 containerd[1606]: time="2026-01-14T01:06:30.326809700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5j8fj,Uid:5c1341b7-068c-4ccd-ba99-cbf173a0144f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9ca9cf4d36167e6c08be93b8abeb553c42d33d511026512e65bb21c4b0f5d04\"" Jan 14 01:06:30.334104 containerd[1606]: time="2026-01-14T01:06:30.333945224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:06:30.393750 containerd[1606]: time="2026-01-14T01:06:30.393522384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-q8lcg,Uid:4615652e-e23f-4213-b02a-57b7f1d1c9f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc224d4fd6bc650818bc47b72f68872efef2b70f152a530e5f583d00b5615155\"" Jan 14 01:06:30.516617 containerd[1606]: time="2026-01-14T01:06:30.516530460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:30.518202 containerd[1606]: time="2026-01-14T01:06:30.518115026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:06:30.518202 containerd[1606]: time="2026-01-14T01:06:30.518164701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:30.518572 kubelet[2838]: E0114 01:06:30.518437 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:06:30.518572 kubelet[2838]: E0114 01:06:30.518502 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:06:30.521637 kubelet[2838]: E0114 01:06:30.518850 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t49pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5j8fj_calico-system(5c1341b7-068c-4ccd-ba99-cbf173a0144f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:30.521637 kubelet[2838]: E0114 01:06:30.520282 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:30.522269 containerd[1606]: time="2026-01-14T01:06:30.519750328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
14 01:06:30.524378 containerd[1606]: time="2026-01-14T01:06:30.524253715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fc68bf94-hv66g,Uid:84272add-ebcc-42f7-bffa-247234ecb849,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:30.524618 containerd[1606]: time="2026-01-14T01:06:30.524256241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4qzsv,Uid:8fab71db-e780-4c15-9007-1a003926cb63,Namespace:kube-system,Attempt:0,}" Jan 14 01:06:30.703353 containerd[1606]: time="2026-01-14T01:06:30.702145394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:30.706787 containerd[1606]: time="2026-01-14T01:06:30.706559218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:06:30.707576 containerd[1606]: time="2026-01-14T01:06:30.706623258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:30.707963 kubelet[2838]: E0114 01:06:30.707897 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:30.708307 kubelet[2838]: E0114 01:06:30.707974 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:30.708307 kubelet[2838]: E0114 01:06:30.708236 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-q8lcg_calico-apiserver(4615652e-e23f-4213-b02a-57b7f1d1c9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:30.711143 kubelet[2838]: E0114 01:06:30.710318 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:06:30.796628 systemd-networkd[1503]: cali381a4ec8b91: Link UP Jan 14 01:06:30.800356 systemd-networkd[1503]: cali381a4ec8b91: Gained carrier Jan 14 01:06:30.828349 containerd[1606]: 2026-01-14 01:06:30.650 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0 coredns-674b8bbfcf- kube-system 8fab71db-e780-4c15-9007-1a003926cb63 832 0 2026-01-14 01:05:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 coredns-674b8bbfcf-4qzsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali381a4ec8b91 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-" Jan 14 01:06:30.828349 containerd[1606]: 2026-01-14 01:06:30.650 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.828349 containerd[1606]: 2026-01-14 01:06:30.741 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" HandleID="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" 
Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.743 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" HandleID="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f4c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"coredns-674b8bbfcf-4qzsv", "timestamp":"2026-01-14 01:06:30.741756025 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.743 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.743 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.743 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.754 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.759 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.766 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.828699 containerd[1606]: 2026-01-14 01:06:30.769 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.771 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.771 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.773 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.778 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.4/26] block=192.168.93.0/26 
handle="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.4/26] handle="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:06:30.829141 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.4/26] IPv6=[] ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" HandleID="k8s-pod-network.28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.790 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fab71db-e780-4c15-9007-1a003926cb63", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"coredns-674b8bbfcf-4qzsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali381a4ec8b91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.790 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.4/32] ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" 
WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.790 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali381a4ec8b91 ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.798 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.800 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fab71db-e780-4c15-9007-1a003926cb63", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd", Pod:"coredns-674b8bbfcf-4qzsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali381a4ec8b91", MAC:"0e:29:a4:80:4a:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:30.829538 containerd[1606]: 2026-01-14 01:06:30.820 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-4qzsv" 
WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--4qzsv-eth0" Jan 14 01:06:30.833305 kubelet[2838]: E0114 01:06:30.833255 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:06:30.836144 kubelet[2838]: E0114 01:06:30.836059 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:30.910040 containerd[1606]: time="2026-01-14T01:06:30.909887053Z" level=info msg="connecting to shim 28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd" address="unix:///run/containerd/s/fca857f5051b603a9003ee1dc34d74d273ee006ec61eea9cdf50e4d9aede6c1c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:30.967586 systemd-networkd[1503]: calie2bf21d32c7: Link UP Jan 14 01:06:30.967000 audit[4423]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:30.967000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd64638e70 a2=0 a3=7ffd64638e5c items=0 ppid=3020 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:30.969820 systemd-networkd[1503]: calie2bf21d32c7: Gained carrier Jan 14 01:06:30.975000 audit[4423]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:30.975000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd64638e70 a2=0 a3=0 items=0 ppid=3020 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:30.993630 systemd[1]: Started cri-containerd-28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd.scope - libcontainer container 28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd. 
Jan 14 01:06:30.995000 audit[4432]: NETFILTER_CFG table=filter:131 family=2 entries=46 op=nft_register_chain pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:30.995000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=23724 a0=3 a1=7ffcb986a950 a2=0 a3=7ffcb986a93c items=0 ppid=3982 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:30.995000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.659 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0 calico-kube-controllers-79fc68bf94- calico-system 84272add-ebcc-42f7-bffa-247234ecb849 835 0 2026-01-14 01:06:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79fc68bf94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 calico-kube-controllers-79fc68bf94-hv66g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie2bf21d32c7 [] [] }} ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.659 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.749 [INFO][4378] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" HandleID="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.750 [INFO][4378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" HandleID="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cea20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"calico-kube-controllers-79fc68bf94-hv66g", "timestamp":"2026-01-14 01:06:30.749966328 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.750 [INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.786 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.855 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.880 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.897 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.905 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.913 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.913 [INFO][4378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.916 [INFO][4378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59 Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.927 [INFO][4378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.949 [INFO][4378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.5/26] block=192.168.93.0/26 handle="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.954 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.5/26] handle="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.955 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:06:31.012638 containerd[1606]: 2026-01-14 01:06:30.955 [INFO][4378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.5/26] IPv6=[] ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" HandleID="k8s-pod-network.1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:30.958 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0", GenerateName:"calico-kube-controllers-79fc68bf94-", Namespace:"calico-system", SelfLink:"", UID:"84272add-ebcc-42f7-bffa-247234ecb849", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fc68bf94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"calico-kube-controllers-79fc68bf94-hv66g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2bf21d32c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:30.958 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.5/32] ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:30.958 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2bf21d32c7 ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:30.974 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" 
Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:30.978 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0", GenerateName:"calico-kube-controllers-79fc68bf94-", Namespace:"calico-system", SelfLink:"", UID:"84272add-ebcc-42f7-bffa-247234ecb849", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fc68bf94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59", Pod:"calico-kube-controllers-79fc68bf94-hv66g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2bf21d32c7", MAC:"ee:f3:00:79:77:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:31.013782 containerd[1606]: 2026-01-14 01:06:31.006 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" Namespace="calico-system" Pod="calico-kube-controllers-79fc68bf94-hv66g" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--kube--controllers--79fc68bf94--hv66g-eth0" Jan 14 01:06:31.026000 audit[4440]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:31.026000 audit[4440]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc28138570 a2=0 a3=7ffc2813855c items=0 ppid=3020 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:31.031000 audit[4440]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4440 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:31.031000 audit[4440]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc28138570 a2=0 a3=0 items=0 ppid=3020 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:31.062000 audit: BPF prog-id=224 op=LOAD Jan 14 01:06:31.064000 audit: BPF prog-id=225 op=LOAD Jan 14 01:06:31.064000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.064000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:06:31.064000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.065000 audit: BPF prog-id=226 op=LOAD Jan 14 01:06:31.065000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.066000 audit: BPF prog-id=227 op=LOAD Jan 14 01:06:31.066000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.066000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:06:31.066000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.067000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:06:31.067000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.068000 audit: BPF prog-id=228 op=LOAD Jan 14 01:06:31.068000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4404 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238656432373166316563323466663835306265383139393166393534 Jan 14 01:06:31.076445 containerd[1606]: time="2026-01-14T01:06:31.076376015Z" level=info msg="connecting to shim 1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59" address="unix:///run/containerd/s/41be1e4336daaf2999233a47c7ff9428709ff305cfaac0cb606282667f4ed25b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:31.100000 audit[4465]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:31.100000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffea5ff9600 a2=0 a3=7ffea5ff95ec items=0 ppid=3982 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.100000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:31.128272 systemd-networkd[1503]: cali2546e5c6555: Gained IPv6LL Jan 14 01:06:31.142428 systemd[1]: Started cri-containerd-1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59.scope - libcontainer container 1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59. 
Jan 14 01:06:31.169020 containerd[1606]: time="2026-01-14T01:06:31.168945216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4qzsv,Uid:8fab71db-e780-4c15-9007-1a003926cb63,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd\"" Jan 14 01:06:31.177805 containerd[1606]: time="2026-01-14T01:06:31.177747925Z" level=info msg="CreateContainer within sandbox \"28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:06:31.183000 audit: BPF prog-id=229 op=LOAD Jan 14 01:06:31.185000 audit: BPF prog-id=230 op=LOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=231 op=LOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=232 op=LOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4455 pid=4468 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.185000 audit: BPF prog-id=233 op=LOAD Jan 14 01:06:31.185000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4455 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163643935306630636432396535353031666535623130396239303234 Jan 14 01:06:31.190399 containerd[1606]: time="2026-01-14T01:06:31.190326093Z" level=info msg="Container f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:31.201216 containerd[1606]: time="2026-01-14T01:06:31.201158568Z" level=info msg="CreateContainer within sandbox \"28ed271f1ec24ff850be81991f954d76d19592d617ba87ad9b5fa3763ca884cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624\"" Jan 14 01:06:31.202947 containerd[1606]: time="2026-01-14T01:06:31.202581251Z" level=info msg="StartContainer for \"f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624\"" Jan 14 01:06:31.204440 containerd[1606]: time="2026-01-14T01:06:31.204400706Z" level=info msg="connecting to shim f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624" address="unix:///run/containerd/s/fca857f5051b603a9003ee1dc34d74d273ee006ec61eea9cdf50e4d9aede6c1c" protocol=ttrpc version=3 Jan 14 01:06:31.237400 systemd[1]: Started cri-containerd-f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624.scope - libcontainer container f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624. 
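Note on the SYSCALL records in this stretch: they are all arch=c000003e (x86_64), and the recurring numbers are 321 (bpf, matching the BPF prog-id LOAD/UNLOAD records), 46 (sendmsg, which is how the netlink writes behind the NETFILTER_CFG entries are made), and 3 (close). A small hand-written lookup sketch, covering only the numbers seen here:

    # Illustrative lookup for the x86_64 syscall numbers appearing in the
    # audit records above (arch=c000003e). Not an exhaustive table.
    X86_64_SYSCALLS = {
        3: "close",
        46: "sendmsg",   # netlink writes behind the NETFILTER_CFG records
        321: "bpf",      # matches the BPF prog-id LOAD/UNLOAD records
    }

    for nr in (321, 46, 3):
        print(nr, X86_64_SYSCALLS.get(nr, "unknown"))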
Jan 14 01:06:31.256375 systemd-networkd[1503]: cali28b7c97d0fd: Gained IPv6LL Jan 14 01:06:31.267000 audit: BPF prog-id=234 op=LOAD Jan 14 01:06:31.268000 audit: BPF prog-id=235 op=LOAD Jan 14 01:06:31.268000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.268000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:06:31.268000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.270609 containerd[1606]: time="2026-01-14T01:06:31.270313380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fc68bf94-hv66g,Uid:84272add-ebcc-42f7-bffa-247234ecb849,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cd950f0cd29e5501fe5b109b902470ceb97a209ce475fccd729ff1c3ba22e59\"" Jan 14 01:06:31.269000 audit: BPF prog-id=236 op=LOAD Jan 14 01:06:31.269000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.270000 audit: BPF prog-id=237 op=LOAD Jan 14 01:06:31.270000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.270000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:06:31.270000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.270000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:06:31.270000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.270000 audit: BPF prog-id=238 op=LOAD Jan 14 01:06:31.270000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4404 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630396430326464616364643262663366656464353939336135646538 Jan 14 01:06:31.288876 containerd[1606]: time="2026-01-14T01:06:31.288819160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:06:31.323234 containerd[1606]: time="2026-01-14T01:06:31.323052456Z" level=info msg="StartContainer for \"f09d02ddacdd2bf3fedd5993a5de86b8740e8dfb5c81ade41169da8185c03624\" returns successfully" Jan 14 01:06:31.460883 containerd[1606]: time="2026-01-14T01:06:31.460822606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:31.463029 containerd[1606]: time="2026-01-14T01:06:31.462758521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:06:31.463325 containerd[1606]: time="2026-01-14T01:06:31.462815933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:31.463679 kubelet[2838]: E0114 01:06:31.463633 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:06:31.463901 kubelet[2838]: E0114 01:06:31.463857 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
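Note on the pull failure above: the containerd "fetch failed after status: 404" line and the kubelet ErrImagePull both say the tag ghcr.io/flatcar/calico/kube-controllers:v3.30.4 does not resolve at the registry. A hypothetical way to check the tag directly against the standard OCI distribution API (anonymous token flow; the /token and /v2/ endpoints follow the usual layout, exact server behaviour is not guaranteed):

    import json
    import urllib.error
    import urllib.request

    # Hypothetical re-check of the tag containerd failed to resolve above.
    # Uses the standard registry token + manifest endpoints; if the tag is
    # still absent, the printed status should mirror the 404 in the journal.
    repo = "flatcar/calico/kube-controllers"
    tag = "v3.30.4"

    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest found:", resp.status)
    except urllib.error.HTTPError as err:
        print("registry returned:", err.code)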
Jan 14 01:06:31.464522 kubelet[2838]: E0114 01:06:31.464415 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5vq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79fc68bf94-hv66g_calico-system(84272add-ebcc-42f7-bffa-247234ecb849): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:31.465737 kubelet[2838]: E0114 01:06:31.465649 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:31.523779 containerd[1606]: time="2026-01-14T01:06:31.523345871Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jgpk9,Uid:addca0d5-666f-4204-b79f-467069ad43cd,Namespace:kube-system,Attempt:0,}" Jan 14 01:06:31.724479 systemd-networkd[1503]: cali114044c2077: Link UP Jan 14 01:06:31.726250 systemd-networkd[1503]: cali114044c2077: Gained carrier Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.618 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0 coredns-674b8bbfcf- kube-system addca0d5-666f-4204-b79f-467069ad43cd 834 0 2026-01-14 01:05:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 coredns-674b8bbfcf-jgpk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali114044c2077 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.618 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.673 [INFO][4543] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" HandleID="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.673 [INFO][4543] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" HandleID="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"coredns-674b8bbfcf-jgpk9", "timestamp":"2026-01-14 01:06:31.673635633 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.674 [INFO][4543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.674 [INFO][4543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.674 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.685 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.692 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.697 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.699 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.701 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.701 [INFO][4543] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.703 [INFO][4543] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.708 [INFO][4543] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.716 [INFO][4543] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.6/26] block=192.168.93.0/26 handle="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.716 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.6/26] handle="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.717 [INFO][4543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
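Note on the IPAM trace above: it claims 192.168.93.6/26 out of the node's affine block 192.168.93.0/26. The block arithmetic, checked with the Python standard library:

    import ipaddress

    # Sanity-check of the Calico IPAM assignment logged above: the claimed
    # pod address must fall inside the node's affine /26 block.
    block = ipaddress.ip_network("192.168.93.0/26")
    pod_ip = ipaddress.ip_address("192.168.93.6")

    print(pod_ip in block)       # True
    print(block.num_addresses)   # 64 addresses per /26 block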
Jan 14 01:06:31.752849 containerd[1606]: 2026-01-14 01:06:31.717 [INFO][4543] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.6/26] IPv6=[] ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" HandleID="k8s-pod-network.bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.719 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"addca0d5-666f-4204-b79f-467069ad43cd", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"coredns-674b8bbfcf-jgpk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali114044c2077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.719 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.6/32] ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.719 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali114044c2077 ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.727 [INFO][4532] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.727 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"addca0d5-666f-4204-b79f-467069ad43cd", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf", Pod:"coredns-674b8bbfcf-jgpk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali114044c2077", MAC:"0e:b8:33:d7:69:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:31.753965 containerd[1606]: 2026-01-14 01:06:31.745 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-jgpk9" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-coredns--674b8bbfcf--jgpk9-eth0" Jan 14 01:06:31.793000 audit[4560]: NETFILTER_CFG table=filter:135 family=2 entries=44 op=nft_register_chain pid=4560 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:31.793000 audit[4560]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffe773a7d30 a2=0 a3=7ffe773a7d1c items=0 ppid=3982 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.793000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:31.803115 containerd[1606]: time="2026-01-14T01:06:31.802436712Z" level=info msg="connecting to shim bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf" address="unix:///run/containerd/s/24cf0a7311a4e55d433f0171c18ac9fd1d27adf55aa4e54ae256ec7e8bacb8cf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:31.878964 kubelet[2838]: E0114 01:06:31.874041 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:06:31.880490 kubelet[2838]: E0114 01:06:31.875281 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:31.880908 kubelet[2838]: E0114 01:06:31.875419 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:31.882474 systemd[1]: Started cri-containerd-bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf.scope - libcontainer container bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf. 
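Note on the WorkloadEndpoint dumps above: the port values are printed in hex (Port:0x35, Port:0x23c1). Decoded, they are the expected coredns container ports:

    # The WorkloadEndpointPort entries above print ports as hex; decoded they
    # match the coredns container ports (53/UDP, 53/TCP, 9153/TCP metrics).
    for name, port in (("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23C1)):
        print(name, port)
    # dns 53
    # dns-tcp 53
    # metrics 9153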
Jan 14 01:06:31.892196 kubelet[2838]: I0114 01:06:31.892110 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4qzsv" podStartSLOduration=45.892045501 podStartE2EDuration="45.892045501s" podCreationTimestamp="2026-01-14 01:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:06:31.891055117 +0000 UTC m=+50.668696949" watchObservedRunningTime="2026-01-14 01:06:31.892045501 +0000 UTC m=+50.669687314" Jan 14 01:06:31.919000 audit: BPF prog-id=239 op=LOAD Jan 14 01:06:31.922000 audit: BPF prog-id=240 op=LOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=241 op=LOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=242 op=LOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:31.922000 audit: BPF prog-id=243 op=LOAD Jan 14 01:06:31.922000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:31.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353036326638343435663133383036353862373332613538313465 Jan 14 01:06:32.021000 audit[4601]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:32.021000 audit[4601]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff9551f060 a2=0 a3=7fff9551f04c items=0 ppid=3020 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:32.027000 audit[4601]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:32.027000 audit[4601]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff9551f060 a2=0 a3=0 items=0 ppid=3020 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:32.047187 containerd[1606]: time="2026-01-14T01:06:32.047021148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jgpk9,Uid:addca0d5-666f-4204-b79f-467069ad43cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf\"" Jan 14 01:06:32.056136 
containerd[1606]: time="2026-01-14T01:06:32.055995087Z" level=info msg="CreateContainer within sandbox \"bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:06:32.069062 containerd[1606]: time="2026-01-14T01:06:32.069006351Z" level=info msg="Container e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:06:32.078874 containerd[1606]: time="2026-01-14T01:06:32.078806589Z" level=info msg="CreateContainer within sandbox \"bd5062f8445f1380658b732a5814e5de3ab5f39ef574c10de81f4f3360a984cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb\"" Jan 14 01:06:32.080137 containerd[1606]: time="2026-01-14T01:06:32.080058033Z" level=info msg="StartContainer for \"e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb\"" Jan 14 01:06:32.081860 containerd[1606]: time="2026-01-14T01:06:32.081529975Z" level=info msg="connecting to shim e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb" address="unix:///run/containerd/s/24cf0a7311a4e55d433f0171c18ac9fd1d27adf55aa4e54ae256ec7e8bacb8cf" protocol=ttrpc version=3 Jan 14 01:06:32.109402 systemd[1]: Started cri-containerd-e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb.scope - libcontainer container e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb. Jan 14 01:06:32.128000 audit: BPF prog-id=244 op=LOAD Jan 14 01:06:32.130000 audit: BPF prog-id=245 op=LOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=246 op=LOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=247 op=LOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.130000 audit: BPF prog-id=248 op=LOAD Jan 14 01:06:32.130000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4569 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536376362376366666330386238636538376633383265376463396264 Jan 14 01:06:32.169933 containerd[1606]: time="2026-01-14T01:06:32.169862971Z" level=info msg="StartContainer for \"e67cb7cffc08b8ce87f382e7dc9bd9fd33227f1fe974e2ebd0b6a31dfed4e3eb\" returns successfully" Jan 14 01:06:32.217400 systemd-networkd[1503]: cali381a4ec8b91: Gained IPv6LL Jan 14 01:06:32.522044 containerd[1606]: time="2026-01-14T01:06:32.521974101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6cw62,Uid:8d5160b5-f0c7-4f7b-963d-652ff95653a3,Namespace:calico-system,Attempt:0,}" Jan 14 01:06:32.522431 containerd[1606]: 
time="2026-01-14T01:06:32.521974104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-jwfn5,Uid:b6e624a7-3dff-47e8-aa9a-152cfa985108,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:06:32.540440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241534948.mount: Deactivated successfully. Jan 14 01:06:32.731910 systemd-networkd[1503]: caliaa9686f6cf8: Link UP Jan 14 01:06:32.733471 systemd-networkd[1503]: calie2bf21d32c7: Gained IPv6LL Jan 14 01:06:32.736772 systemd-networkd[1503]: caliaa9686f6cf8: Gained carrier Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.618 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0 calico-apiserver-599f9958f6- calico-apiserver b6e624a7-3dff-47e8-aa9a-152cfa985108 837 0 2026-01-14 01:05:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599f9958f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 calico-apiserver-599f9958f6-jwfn5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa9686f6cf8 [] [] }} ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.619 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" HandleID="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" HandleID="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"calico-apiserver-599f9958f6-jwfn5", "timestamp":"2026-01-14 01:06:32.676272176 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.676 
[INFO][4668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.685 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.691 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.697 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.699 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.701 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.701 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.703 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11 Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.708 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.717 [INFO][4668] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.7/26] block=192.168.93.0/26 handle="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.718 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.7/26] handle="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.718 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:06:32.769107 containerd[1606]: 2026-01-14 01:06:32.718 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.7/26] IPv6=[] ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" HandleID="k8s-pod-network.87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.721 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0", GenerateName:"calico-apiserver-599f9958f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6e624a7-3dff-47e8-aa9a-152cfa985108", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599f9958f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"calico-apiserver-599f9958f6-jwfn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9686f6cf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.721 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.7/32] ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.721 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa9686f6cf8 ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.741 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" 
WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.742 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0", GenerateName:"calico-apiserver-599f9958f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6e624a7-3dff-47e8-aa9a-152cfa985108", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599f9958f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11", Pod:"calico-apiserver-599f9958f6-jwfn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9686f6cf8", MAC:"fa:2c:e2:ba:ff:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:32.773160 containerd[1606]: 2026-01-14 01:06:32.765 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" Namespace="calico-apiserver" Pod="calico-apiserver-599f9958f6-jwfn5" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-calico--apiserver--599f9958f6--jwfn5-eth0" Jan 14 01:06:32.794204 systemd-networkd[1503]: cali114044c2077: Gained IPv6LL Jan 14 01:06:32.831006 containerd[1606]: time="2026-01-14T01:06:32.830927177Z" level=info msg="connecting to shim 87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11" address="unix:///run/containerd/s/fd23b6e897c78cab35d099f96419a1653daf59a5f78911a7406620f844723782" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:32.860000 audit[4698]: NETFILTER_CFG table=filter:138 family=2 entries=53 op=nft_register_chain pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:32.860000 audit[4698]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffc3a1414e0 a2=0 a3=7ffc3a1414cc items=0 ppid=3982 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:06:32.860000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:32.886789 kubelet[2838]: E0114 01:06:32.886709 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:32.919209 systemd-networkd[1503]: cali7af4247fe0e: Link UP Jan 14 01:06:32.923364 systemd[1]: Started cri-containerd-87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11.scope - libcontainer container 87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11. Jan 14 01:06:32.932520 systemd-networkd[1503]: cali7af4247fe0e: Gained carrier Jan 14 01:06:32.937539 kubelet[2838]: I0114 01:06:32.937295 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jgpk9" podStartSLOduration=45.937225866 podStartE2EDuration="45.937225866s" podCreationTimestamp="2026-01-14 01:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:06:32.935839127 +0000 UTC m=+51.713480971" watchObservedRunningTime="2026-01-14 01:06:32.937225866 +0000 UTC m=+51.714867689" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.621 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0 csi-node-driver- calico-system 8d5160b5-f0c7-4f7b-963d-652ff95653a3 729 0 2026-01-14 01:06:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3 csi-node-driver-6cw62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7af4247fe0e [] [] }} ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.622 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" HandleID="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" 
Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.676 [INFO][4671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" HandleID="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", "pod":"csi-node-driver-6cw62", "timestamp":"2026-01-14 01:06:32.676760452 +0000 UTC"}, Hostname:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.677 [INFO][4671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.718 [INFO][4671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.718 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3' Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.787 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.805 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.816 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.819 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.823 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.823 [INFO][4671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.825 [INFO][4671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.834 [INFO][4671] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.862 [INFO][4671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.8/26] block=192.168.93.0/26 
handle="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.862 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.8/26] handle="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" host="ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3" Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.862 [INFO][4671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:06:32.966797 containerd[1606]: 2026-01-14 01:06:32.863 [INFO][4671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.8/26] IPv6=[] ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" HandleID="k8s-pod-network.f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Workload="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.867 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d5160b5-f0c7-4f7b-963d-652ff95653a3", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"", Pod:"csi-node-driver-6cw62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7af4247fe0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.868 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.8/32] ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.872 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7af4247fe0e ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" 
Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.933 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.936 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d5160b5-f0c7-4f7b-963d-652ff95653a3", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-python-bump-4-62e0eca2f318f0c6b1c3", ContainerID:"f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd", Pod:"csi-node-driver-6cw62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7af4247fe0e", MAC:"4e:58:2d:38:49:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:06:32.970367 containerd[1606]: 2026-01-14 01:06:32.960 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" Namespace="calico-system" Pod="csi-node-driver-6cw62" WorkloadEndpoint="ci--4578--0--0--python--bump--4--62e0eca2f318f0c6b1c3-k8s-csi--node--driver--6cw62-eth0" Jan 14 01:06:33.025507 containerd[1606]: time="2026-01-14T01:06:33.025364005Z" level=info msg="connecting to shim f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd" address="unix:///run/containerd/s/5b049730eb907778ea78753a1c4e8844a20045939a0e740308ac2da42625e5fc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:06:33.034000 audit[4747]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=4747 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:33.034000 audit[4747]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 
a0=3 a1=7ffc4e74f290 a2=0 a3=7ffc4e74f27c items=0 ppid=3020 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:33.041000 audit[4747]: NETFILTER_CFG table=nat:140 family=2 entries=35 op=nft_register_chain pid=4747 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:33.041000 audit[4747]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc4e74f290 a2=0 a3=7ffc4e74f27c items=0 ppid=3020 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:33.078000 audit: BPF prog-id=249 op=LOAD Jan 14 01:06:33.080000 audit: BPF prog-id=250 op=LOAD Jan 14 01:06:33.080000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=251 op=LOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=252 op=LOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.081000 audit: BPF prog-id=253 op=LOAD Jan 14 01:06:33.081000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4696 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837663261666437353965326537346366366266333730326161336134 Jan 14 01:06:33.089665 systemd[1]: Started cri-containerd-f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd.scope - libcontainer container f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd. 
Jan 14 01:06:33.089000 audit[4770]: NETFILTER_CFG table=filter:141 family=2 entries=56 op=nft_register_chain pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:06:33.089000 audit[4770]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffe0374b6c0 a2=0 a3=7ffe0374b6ac items=0 ppid=3982 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.089000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:06:33.134000 audit: BPF prog-id=254 op=LOAD Jan 14 01:06:33.136000 audit: BPF prog-id=255 op=LOAD Jan 14 01:06:33.136000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF prog-id=255 op=UNLOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF prog-id=256 op=LOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF prog-id=257 op=LOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF 
prog-id=257 op=UNLOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.138000 audit: BPF prog-id=258 op=LOAD Jan 14 01:06:33.138000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4745 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383634346636346234326661653832356537386437346631636639 Jan 14 01:06:33.181101 containerd[1606]: time="2026-01-14T01:06:33.181009049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6cw62,Uid:8d5160b5-f0c7-4f7b-963d-652ff95653a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f08644f64b42fae825e78d74f1cf90ce60f9e46392990744f8b2b06dd7cbbefd\"" Jan 14 01:06:33.186207 containerd[1606]: time="2026-01-14T01:06:33.185733808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:06:33.221013 containerd[1606]: time="2026-01-14T01:06:33.220859603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599f9958f6-jwfn5,Uid:b6e624a7-3dff-47e8-aa9a-152cfa985108,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"87f2afd759e2e74cf6bf3702aa3a4a51b9ae5e46dd1edbe4ecf04e28b1060d11\"" Jan 14 01:06:33.341156 containerd[1606]: time="2026-01-14T01:06:33.340945596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:33.343316 containerd[1606]: time="2026-01-14T01:06:33.343152630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:06:33.343482 containerd[1606]: time="2026-01-14T01:06:33.343332181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:33.343715 kubelet[2838]: E0114 01:06:33.343657 2838 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:06:33.343861 kubelet[2838]: E0114 01:06:33.343730 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:06:33.344369 containerd[1606]: time="2026-01-14T01:06:33.344305747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:06:33.345154 kubelet[2838]: E0114 01:06:33.344786 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:33.512919 containerd[1606]: time="2026-01-14T01:06:33.512858017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:33.514808 containerd[1606]: time="2026-01-14T01:06:33.514745101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:06:33.514808 
containerd[1606]: time="2026-01-14T01:06:33.514765885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:33.515228 kubelet[2838]: E0114 01:06:33.515060 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:33.515228 kubelet[2838]: E0114 01:06:33.515147 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:33.515560 kubelet[2838]: E0114 01:06:33.515487 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfph8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-jwfn5_calico-apiserver(b6e624a7-3dff-47e8-aa9a-152cfa985108): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:33.516755 containerd[1606]: time="2026-01-14T01:06:33.516688416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:06:33.517450 
kubelet[2838]: E0114 01:06:33.517297 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:06:33.677790 containerd[1606]: time="2026-01-14T01:06:33.677617849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:33.679486 containerd[1606]: time="2026-01-14T01:06:33.679418618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:06:33.679624 containerd[1606]: time="2026-01-14T01:06:33.679551938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:33.679897 kubelet[2838]: E0114 01:06:33.679838 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:06:33.680007 kubelet[2838]: E0114 01:06:33.679915 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:06:33.680401 kubelet[2838]: E0114 01:06:33.680197 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:33.681581 kubelet[2838]: E0114 01:06:33.681497 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:33.879659 kubelet[2838]: E0114 01:06:33.879488 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:33.883295 kubelet[2838]: E0114 01:06:33.883043 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:06:33.931000 audit[4797]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:33.931000 audit[4797]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd235df850 a2=0 a3=7ffd235df83c items=0 ppid=3020 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:33.943000 audit[4797]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:06:33.943000 audit[4797]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd235df850 a2=0 a3=7ffd235df83c items=0 ppid=3020 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:06:34.328616 systemd-networkd[1503]: caliaa9686f6cf8: Gained IPv6LL Jan 14 01:06:34.330025 systemd-networkd[1503]: cali7af4247fe0e: Gained IPv6LL Jan 14 01:06:34.886534 kubelet[2838]: E0114 01:06:34.886382 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:06:34.889370 kubelet[2838]: E0114 01:06:34.889317 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:37.235988 ntpd[1559]: Listen normally on 7 vxlan.calico 192.168.93.0:123 Jan 14 01:06:37.236122 ntpd[1559]: Listen normally on 8 calicde0e6e82f7 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 7 vxlan.calico 192.168.93.0:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 8 calicde0e6e82f7 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 9 vxlan.calico [fe80::64da:81ff:feec:f904%5]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 10 cali28b7c97d0fd [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 11 cali2546e5c6555 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 12 cali381a4ec8b91 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 13 calie2bf21d32c7 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 14 cali114044c2077 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 15 caliaa9686f6cf8 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 01:06:37.236672 ntpd[1559]: 14 Jan 01:06:37 ntpd[1559]: Listen normally on 16 cali7af4247fe0e [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 01:06:37.236174 ntpd[1559]: Listen normally on 9 vxlan.calico [fe80::64da:81ff:feec:f904%5]:123 Jan 14 01:06:37.236216 ntpd[1559]: Listen normally on 10 cali28b7c97d0fd [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 01:06:37.236261 ntpd[1559]: Listen normally on 11 cali2546e5c6555 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 14 01:06:37.236331 ntpd[1559]: Listen normally on 12 cali381a4ec8b91 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 01:06:37.236373 ntpd[1559]: Listen normally on 13 calie2bf21d32c7 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 01:06:37.236414 ntpd[1559]: Listen normally on 14 cali114044c2077 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 01:06:37.236454 ntpd[1559]: Listen normally on 15 caliaa9686f6cf8 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 01:06:37.236492 ntpd[1559]: Listen normally on 16 cali7af4247fe0e [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 01:06:39.526147 containerd[1606]: time="2026-01-14T01:06:39.525765179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:06:39.686766 containerd[1606]: time="2026-01-14T01:06:39.686701291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:39.688466 containerd[1606]: time="2026-01-14T01:06:39.688386567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:06:39.688628 containerd[1606]: time="2026-01-14T01:06:39.688499187Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:39.688795 kubelet[2838]: E0114 01:06:39.688687 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:06:39.689713 kubelet[2838]: E0114 01:06:39.688798 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:06:39.689713 kubelet[2838]: E0114 01:06:39.689048 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5e3e935377b84f50a445073ab6fc7676,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:39.692285 containerd[1606]: time="2026-01-14T01:06:39.692221485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:06:39.847685 containerd[1606]: time="2026-01-14T01:06:39.847504855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:39.849085 containerd[1606]: time="2026-01-14T01:06:39.849024395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:06:39.849292 containerd[1606]: time="2026-01-14T01:06:39.849220033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" 
Jan 14 01:06:39.849521 kubelet[2838]: E0114 01:06:39.849441 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:06:39.849521 kubelet[2838]: E0114 01:06:39.849512 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:06:39.849799 kubelet[2838]: E0114 01:06:39.849685 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:39.851091 kubelet[2838]: E0114 01:06:39.851015 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:06:43.526552 containerd[1606]: time="2026-01-14T01:06:43.525761814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:06:43.690328 containerd[1606]: time="2026-01-14T01:06:43.690265101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:43.692378 containerd[1606]: time="2026-01-14T01:06:43.692166413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:06:43.692378 containerd[1606]: time="2026-01-14T01:06:43.692188034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:43.692736 kubelet[2838]: E0114 01:06:43.692587 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:06:43.692736 kubelet[2838]: E0114 01:06:43.692660 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:06:43.693752 kubelet[2838]: E0114 01:06:43.692861 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t49pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5j8fj_calico-system(5c1341b7-068c-4ccd-ba99-cbf173a0144f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:43.694412 kubelet[2838]: E0114 01:06:43.694343 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:45.525981 containerd[1606]: time="2026-01-14T01:06:45.525624615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:06:45.693244 containerd[1606]: time="2026-01-14T01:06:45.693034200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:45.695454 containerd[1606]: time="2026-01-14T01:06:45.695355813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:45.696278 containerd[1606]: time="2026-01-14T01:06:45.695361378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:06:45.696630 kubelet[2838]: E0114 01:06:45.696051 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:06:45.696630 kubelet[2838]: E0114 01:06:45.696212 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:06:45.698571 
kubelet[2838]: E0114 01:06:45.697808 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5vq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79fc68bf94-hv66g_calico-system(84272add-ebcc-42f7-bffa-247234ecb849): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:45.700486 kubelet[2838]: E0114 01:06:45.700419 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:46.522567 containerd[1606]: time="2026-01-14T01:06:46.522161990Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:06:46.712694 containerd[1606]: time="2026-01-14T01:06:46.712621728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:46.714776 containerd[1606]: time="2026-01-14T01:06:46.714716148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:06:46.715006 containerd[1606]: time="2026-01-14T01:06:46.714741422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:46.715137 kubelet[2838]: E0114 01:06:46.715052 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:46.716153 kubelet[2838]: E0114 01:06:46.715159 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:46.716153 kubelet[2838]: E0114 01:06:46.715454 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-q8lcg_calico-apiserver(4615652e-e23f-4213-b02a-57b7f1d1c9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:46.716843 kubelet[2838]: E0114 01:06:46.716766 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:06:47.524022 containerd[1606]: time="2026-01-14T01:06:47.523530289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:06:47.679962 containerd[1606]: time="2026-01-14T01:06:47.679894638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:47.681865 containerd[1606]: time="2026-01-14T01:06:47.681703526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:06:47.681865 containerd[1606]: time="2026-01-14T01:06:47.681776976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:47.682339 kubelet[2838]: E0114 01:06:47.682260 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:47.682339 kubelet[2838]: E0114 01:06:47.682333 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:06:47.682634 kubelet[2838]: E0114 01:06:47.682544 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfph8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-jwfn5_calico-apiserver(b6e624a7-3dff-47e8-aa9a-152cfa985108): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:47.684091 kubelet[2838]: E0114 01:06:47.683921 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:06:48.524134 containerd[1606]: time="2026-01-14T01:06:48.522998584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:06:48.781531 containerd[1606]: time="2026-01-14T01:06:48.781356879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:48.783114 containerd[1606]: time="2026-01-14T01:06:48.783029419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:06:48.783364 containerd[1606]: time="2026-01-14T01:06:48.783092572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:48.783638 kubelet[2838]: E0114 01:06:48.783582 2838 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:06:48.784207 kubelet[2838]: E0114 01:06:48.783655 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:06:48.784207 kubelet[2838]: E0114 01:06:48.783848 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:48.787525 containerd[1606]: time="2026-01-14T01:06:48.787126234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:06:48.954873 containerd[1606]: time="2026-01-14T01:06:48.954601424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:06:48.956495 containerd[1606]: time="2026-01-14T01:06:48.956414264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:06:48.956688 containerd[1606]: time="2026-01-14T01:06:48.956463716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:06:48.956901 kubelet[2838]: E0114 01:06:48.956813 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:06:48.956901 kubelet[2838]: E0114 01:06:48.956887 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:06:48.957780 kubelet[2838]: E0114 01:06:48.957189 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:06:48.959115 kubelet[2838]: E0114 01:06:48.958980 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:06:50.523641 kubelet[2838]: E0114 01:06:50.523504 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:06:55.098473 systemd[1]: Started sshd@7-10.128.0.42:22-4.153.228.146:46412.service - OpenSSH per-connection server daemon (4.153.228.146:46412). Jan 14 01:06:55.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.42:22-4.153.228.146:46412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:55.104850 kernel: kauditd_printk_skb: 239 callbacks suppressed Jan 14 01:06:55.105116 kernel: audit: type=1130 audit(1768352815.097:732): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.42:22-4.153.228.146:46412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:55.459000 audit[4827]: USER_ACCT pid=4827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.473065 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:06:55.476991 sshd[4827]: Accepted publickey for core from 4.153.228.146 port 46412 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:06:55.464000 audit[4827]: CRED_ACQ pid=4827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.495806 systemd-logind[1572]: New session 9 of user core. 
Jan 14 01:06:55.518516 kernel: audit: type=1101 audit(1768352815.459:733): pid=4827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.519275 kernel: audit: type=1103 audit(1768352815.464:734): pid=4827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.519360 kernel: audit: type=1006 audit(1768352815.464:735): pid=4827 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:06:55.521197 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:06:55.536106 kubelet[2838]: E0114 01:06:55.535916 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:06:55.464000 audit[4827]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef6ca4240 a2=3 a3=0 items=0 ppid=1 pid=4827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:55.569497 kernel: audit: type=1300 audit(1768352815.464:735): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef6ca4240 a2=3 a3=0 items=0 ppid=1 pid=4827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:55.464000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:06:55.583644 kernel: audit: type=1327 audit(1768352815.464:735): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:06:55.536000 audit[4827]: USER_START pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.620127 kernel: audit: type=1105 audit(1768352815.536:736): pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.567000 audit[4831]: CRED_ACQ pid=4831 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.647163 
kernel: audit: type=1103 audit(1768352815.567:737): pid=4831 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.816097 sshd[4831]: Connection closed by 4.153.228.146 port 46412 Jan 14 01:06:55.818250 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Jan 14 01:06:55.819000 audit[4827]: USER_END pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.826909 systemd[1]: sshd@7-10.128.0.42:22-4.153.228.146:46412.service: Deactivated successfully. Jan 14 01:06:55.832054 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:06:55.836798 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:06:55.839671 systemd-logind[1572]: Removed session 9. Jan 14 01:06:55.819000 audit[4827]: CRED_DISP pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.858123 kernel: audit: type=1106 audit(1768352815.819:738): pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.858204 kernel: audit: type=1104 audit(1768352815.819:739): pid=4827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:06:55.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.42:22-4.153.228.146:46412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:58.522603 kubelet[2838]: E0114 01:06:58.522242 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:06:59.528298 kubelet[2838]: E0114 01:06:59.527721 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:07:00.890305 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:07:00.890467 kernel: audit: type=1130 audit(1768352820.880:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.42:22-4.153.228.146:46424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:00.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.42:22-4.153.228.146:46424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:00.881123 systemd[1]: Started sshd@8-10.128.0.42:22-4.153.228.146:46424.service - OpenSSH per-connection server daemon (4.153.228.146:46424). Jan 14 01:07:01.236000 audit[4869]: USER_ACCT pid=4869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.237609 sshd[4869]: Accepted publickey for core from 4.153.228.146 port 46424 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:01.241011 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:01.251137 systemd-logind[1572]: New session 10 of user core. Jan 14 01:07:01.268624 kernel: audit: type=1101 audit(1768352821.236:742): pid=4869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.238000 audit[4869]: CRED_ACQ pid=4869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.270506 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 01:07:01.312438 kernel: audit: type=1103 audit(1768352821.238:743): pid=4869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.312634 kernel: audit: type=1006 audit(1768352821.238:744): pid=4869 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:07:01.313323 kernel: audit: type=1300 audit(1768352821.238:744): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde0316670 a2=3 a3=0 items=0 ppid=1 pid=4869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:01.238000 audit[4869]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde0316670 a2=3 a3=0 items=0 ppid=1 pid=4869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:01.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:01.353495 kernel: audit: type=1327 audit(1768352821.238:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:01.354361 kernel: audit: type=1105 audit(1768352821.283:745): pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.283000 audit[4869]: USER_START pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.390517 kernel: audit: type=1103 audit(1768352821.287:746): pid=4873 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.287000 audit[4873]: CRED_ACQ pid=4873 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.547016 sshd[4873]: Connection closed by 4.153.228.146 port 46424 Jan 14 01:07:01.547440 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:01.549000 audit[4869]: USER_END pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.557695 systemd[1]: sshd@8-10.128.0.42:22-4.153.228.146:46424.service: Deactivated successfully. 
Jan 14 01:07:01.563416 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:07:01.567380 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:07:01.569435 systemd-logind[1572]: Removed session 10. Jan 14 01:07:01.588189 kernel: audit: type=1106 audit(1768352821.549:747): pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.549000 audit[4869]: CRED_DISP pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:01.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.42:22-4.153.228.146:46424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:01.616363 kernel: audit: type=1104 audit(1768352821.549:748): pid=4869 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:02.523779 kubelet[2838]: E0114 01:07:02.523616 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:07:02.526013 containerd[1606]: time="2026-01-14T01:07:02.524289322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:07:02.691460 containerd[1606]: time="2026-01-14T01:07:02.691380905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:02.693507 containerd[1606]: time="2026-01-14T01:07:02.693312831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:07:02.693507 containerd[1606]: time="2026-01-14T01:07:02.693377739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:02.693788 kubelet[2838]: E0114 01:07:02.693720 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:07:02.693879 kubelet[2838]: E0114 01:07:02.693792 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:07:02.694025 kubelet[2838]: E0114 01:07:02.693964 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5e3e935377b84f50a445073ab6fc7676,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:02.696582 containerd[1606]: time="2026-01-14T01:07:02.696546160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:07:02.866102 containerd[1606]: time="2026-01-14T01:07:02.865897672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:02.867799 containerd[1606]: time="2026-01-14T01:07:02.867697627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:07:02.867799 containerd[1606]: time="2026-01-14T01:07:02.867756402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:02.868153 kubelet[2838]: E0114 01:07:02.868099 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:07:02.868289 kubelet[2838]: E0114 01:07:02.868169 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:07:02.868415 kubelet[2838]: E0114 01:07:02.868351 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:02.870091 kubelet[2838]: E0114 01:07:02.870001 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:07:03.528716 kubelet[2838]: E0114 01:07:03.528491 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:07:06.618670 systemd[1]: Started sshd@9-10.128.0.42:22-4.153.228.146:38872.service - OpenSSH per-connection server daemon (4.153.228.146:38872). Jan 14 01:07:06.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.42:22-4.153.228.146:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:06.625949 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:07:06.626048 kernel: audit: type=1130 audit(1768352826.619:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.42:22-4.153.228.146:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.005000 audit[4888]: USER_ACCT pid=4888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.034584 sshd[4888]: Accepted publickey for core from 4.153.228.146 port 38872 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:07.036000 audit[4888]: CRED_ACQ pid=4888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.040811 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:07.064652 kernel: audit: type=1101 audit(1768352827.005:751): pid=4888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.064798 kernel: audit: type=1103 audit(1768352827.036:752): pid=4888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.036000 audit[4888]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe12c1e000 a2=3 a3=0 items=0 ppid=1 pid=4888 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:07.112621 kernel: audit: type=1006 audit(1768352827.036:753): pid=4888 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:07:07.112785 kernel: audit: type=1300 audit(1768352827.036:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffe12c1e000 a2=3 a3=0 items=0 ppid=1 pid=4888 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:07.036000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:07.127216 kernel: audit: type=1327 audit(1768352827.036:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:07.124100 systemd-logind[1572]: New session 11 of user core. Jan 14 01:07:07.130408 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 01:07:07.140000 audit[4888]: USER_START pid=4888 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.179921 kernel: audit: type=1105 audit(1768352827.140:754): pid=4888 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.179000 audit[4892]: CRED_ACQ pid=4892 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.213575 kernel: audit: type=1103 audit(1768352827.179:755): pid=4892 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.465381 sshd[4892]: Connection closed by 4.153.228.146 port 38872 Jan 14 01:07:07.465141 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:07.507028 kernel: audit: type=1106 audit(1768352827.467:756): pid=4888 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.467000 audit[4888]: USER_END pid=4888 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.467000 audit[4888]: CRED_DISP pid=4888 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.535053 containerd[1606]: time="2026-01-14T01:07:07.534594948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:07:07.535663 kernel: audit: type=1104 audit(1768352827.467:757): pid=4888 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.512737 systemd[1]: sshd@9-10.128.0.42:22-4.153.228.146:38872.service: Deactivated successfully. Jan 14 01:07:07.513190 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:07:07.519556 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:07:07.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.42:22-4.153.228.146:38872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.565999 systemd[1]: Started sshd@10-10.128.0.42:22-4.153.228.146:38888.service - OpenSSH per-connection server daemon (4.153.228.146:38888). Jan 14 01:07:07.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.42:22-4.153.228.146:38888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.570788 systemd-logind[1572]: Removed session 11. Jan 14 01:07:07.705705 containerd[1606]: time="2026-01-14T01:07:07.705256637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:07.708547 containerd[1606]: time="2026-01-14T01:07:07.707295349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:07:07.708879 containerd[1606]: time="2026-01-14T01:07:07.708750932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:07.709525 kubelet[2838]: E0114 01:07:07.709136 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:07:07.709525 kubelet[2838]: E0114 01:07:07.709194 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:07:07.709525 kubelet[2838]: E0114 01:07:07.709401 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t49pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5j8fj_calico-system(5c1341b7-068c-4ccd-ba99-cbf173a0144f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:07.710744 kubelet[2838]: E0114 01:07:07.710696 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:07:07.936000 audit[4904]: USER_ACCT pid=4904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.938438 sshd[4904]: Accepted publickey for core from 4.153.228.146 port 38888 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:07.940000 audit[4904]: CRED_ACQ pid=4904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.941000 audit[4904]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfd105ff0 a2=3 a3=0 items=0 ppid=1 pid=4904 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:07.941000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:07.944098 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:07.958842 systemd-logind[1572]: New session 12 of user core. Jan 14 01:07:07.963607 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 01:07:07.973000 audit[4904]: USER_START pid=4904 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:07.978000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.258401 sshd[4908]: Connection closed by 4.153.228.146 port 38888 Jan 14 01:07:08.259418 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:08.261000 audit[4904]: USER_END pid=4904 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.261000 audit[4904]: CRED_DISP pid=4904 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.267173 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:07:08.267707 systemd[1]: sshd@10-10.128.0.42:22-4.153.228.146:38888.service: Deactivated successfully. Jan 14 01:07:08.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.42:22-4.153.228.146:38888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:08.271931 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:07:08.274810 systemd-logind[1572]: Removed session 12. 
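The audit PROCTITLE records above carry the process title as a hex-encoded byte string (proctitle=737368642D...). A minimal Python sketch, not part of the original log, showing how such a value decodes; the sample string is copied verbatim from the records above:

# Decode the hex-encoded proctitle field of an audit PROCTITLE record.
value = "737368642D73657373696F6E3A20636F7265205B707269765D"  # copied from the records above
print(bytes.fromhex(value).decode("utf-8", errors="replace"))
# -> sshd-session: core [priv]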
Jan 14 01:07:08.326013 systemd[1]: Started sshd@11-10.128.0.42:22-4.153.228.146:38900.service - OpenSSH per-connection server daemon (4.153.228.146:38900). Jan 14 01:07:08.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.42:22-4.153.228.146:38900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:08.665000 audit[4924]: USER_ACCT pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.666832 sshd[4924]: Accepted publickey for core from 4.153.228.146 port 38900 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:08.667000 audit[4924]: CRED_ACQ pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.667000 audit[4924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff5914180 a2=3 a3=0 items=0 ppid=1 pid=4924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:08.667000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:08.669903 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:08.678214 systemd-logind[1572]: New session 13 of user core. Jan 14 01:07:08.691410 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:07:08.695000 audit[4924]: USER_START pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.697000 audit[4928]: CRED_ACQ pid=4928 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.915244 sshd[4928]: Connection closed by 4.153.228.146 port 38900 Jan 14 01:07:08.916056 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:08.917000 audit[4924]: USER_END pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.917000 audit[4924]: CRED_DISP pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:08.924861 systemd[1]: sshd@11-10.128.0.42:22-4.153.228.146:38900.service: Deactivated successfully. 
Jan 14 01:07:08.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.42:22-4.153.228.146:38900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:08.928156 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:07:08.930809 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:07:08.932594 systemd-logind[1572]: Removed session 13. Jan 14 01:07:13.525787 containerd[1606]: time="2026-01-14T01:07:13.525647767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:07:13.681712 containerd[1606]: time="2026-01-14T01:07:13.681634155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:13.683819 containerd[1606]: time="2026-01-14T01:07:13.683645258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:07:13.683819 containerd[1606]: time="2026-01-14T01:07:13.683735092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:13.684673 kubelet[2838]: E0114 01:07:13.684292 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:07:13.684673 kubelet[2838]: E0114 01:07:13.684359 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:07:13.684673 kubelet[2838]: E0114 01:07:13.684579 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5vq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79fc68bf94-hv66g_calico-system(84272add-ebcc-42f7-bffa-247234ecb849): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:13.686458 kubelet[2838]: E0114 01:07:13.686184 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:07:13.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.42:22-4.153.228.146:38914 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 14 01:07:13.992155 systemd[1]: Started sshd@12-10.128.0.42:22-4.153.228.146:38914.service - OpenSSH per-connection server daemon (4.153.228.146:38914). Jan 14 01:07:14.008062 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:07:14.008244 kernel: audit: type=1130 audit(1768352833.991:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.42:22-4.153.228.146:38914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:14.361373 sshd[4942]: Accepted publickey for core from 4.153.228.146 port 38914 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:14.358000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.393106 kernel: audit: type=1101 audit(1768352834.358:778): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.396695 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:14.393000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.426572 kernel: audit: type=1103 audit(1768352834.393:779): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.437786 systemd-logind[1572]: New session 14 of user core. Jan 14 01:07:14.451105 kernel: audit: type=1006 audit(1768352834.393:780): pid=4942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:07:14.393000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff08319c30 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:14.455392 systemd[1]: Started session-14.scope - Session 14 of User core. 
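Audit records are stamped as audit(EPOCH.millis:serial), e.g. audit(1768352834.358:778) above, while the surrounding journal lines use wall-clock time. A small Python sketch for converting the epoch portion back to a calendar timestamp; the sample value is taken from the record above, and the UTC interpretation is an assumption that happens to line up with the Jan 14 01:07:14 stamps here:

from datetime import datetime, timezone

# Epoch portion of audit(1768352834.358:778) from the record above.
epoch = float("1768352834.358")
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2026-01-14T01:07:14.358000+00:00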
Jan 14 01:07:14.487131 kernel: audit: type=1300 audit(1768352834.393:780): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff08319c30 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:14.393000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:14.505123 kernel: audit: type=1327 audit(1768352834.393:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:14.490000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.526652 containerd[1606]: time="2026-01-14T01:07:14.526280343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:07:14.530353 kubelet[2838]: E0114 01:07:14.530283 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:07:14.543101 kernel: audit: type=1105 audit(1768352834.490:781): pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.506000 audit[4950]: CRED_ACQ pid=4950 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.572713 kernel: audit: type=1103 audit(1768352834.506:782): pid=4950 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.716485 containerd[1606]: time="2026-01-14T01:07:14.716417656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:14.718217 containerd[1606]: time="2026-01-14T01:07:14.718158882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:07:14.718374 containerd[1606]: time="2026-01-14T01:07:14.718183259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:14.718586 kubelet[2838]: E0114 01:07:14.718516 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:07:14.719116 kubelet[2838]: E0114 01:07:14.718586 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:07:14.719116 kubelet[2838]: E0114 01:07:14.718786 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-q8lcg_calico-apiserver(4615652e-e23f-4213-b02a-57b7f1d1c9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:14.720623 kubelet[2838]: E0114 01:07:14.720516 2838 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:07:14.748368 sshd[4950]: Connection closed by 4.153.228.146 port 38914 Jan 14 01:07:14.749457 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:14.750000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.788164 kernel: audit: type=1106 audit(1768352834.750:783): pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.750000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:14.792659 systemd[1]: sshd@12-10.128.0.42:22-4.153.228.146:38914.service: Deactivated successfully. Jan 14 01:07:14.796891 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:07:14.799173 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:07:14.805065 systemd-logind[1572]: Removed session 14. Jan 14 01:07:14.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.42:22-4.153.228.146:38914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:14.814112 kernel: audit: type=1104 audit(1768352834.750:784): pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:15.524678 containerd[1606]: time="2026-01-14T01:07:15.524623892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:07:15.683622 containerd[1606]: time="2026-01-14T01:07:15.683560709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:15.686192 containerd[1606]: time="2026-01-14T01:07:15.686046138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:07:15.686686 containerd[1606]: time="2026-01-14T01:07:15.686102365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:15.687091 kubelet[2838]: E0114 01:07:15.686945 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:07:15.687317 kubelet[2838]: E0114 01:07:15.687215 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:07:15.687980 kubelet[2838]: E0114 01:07:15.687899 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfph8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599f9958f6-jwfn5_calico-apiserver(b6e624a7-3dff-47e8-aa9a-152cfa985108): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:15.689559 kubelet[2838]: E0114 01:07:15.689410 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:07:17.523669 containerd[1606]: time="2026-01-14T01:07:17.523318624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:07:17.699220 containerd[1606]: time="2026-01-14T01:07:17.699122764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:17.701134 containerd[1606]: time="2026-01-14T01:07:17.701052485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:07:17.701410 containerd[1606]: time="2026-01-14T01:07:17.701101185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:17.701665 kubelet[2838]: E0114 01:07:17.701620 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:07:17.702562 kubelet[2838]: E0114 01:07:17.701686 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:07:17.702562 kubelet[2838]: E0114 01:07:17.702434 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:17.706278 containerd[1606]: time="2026-01-14T01:07:17.706232621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:07:17.863732 containerd[1606]: time="2026-01-14T01:07:17.863555443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:17.865486 containerd[1606]: time="2026-01-14T01:07:17.865336732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:07:17.865486 containerd[1606]: time="2026-01-14T01:07:17.865399626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:17.865807 kubelet[2838]: E0114 01:07:17.865698 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:07:17.865961 kubelet[2838]: E0114 01:07:17.865821 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:07:17.866074 kubelet[2838]: E0114 01:07:17.866012 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9n8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6cw62_calico-system(8d5160b5-f0c7-4f7b-963d-652ff95653a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:17.867744 kubelet[2838]: E0114 01:07:17.867652 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:07:19.818674 systemd[1]: Started sshd@13-10.128.0.42:22-4.153.228.146:48958.service - OpenSSH per-connection server daemon (4.153.228.146:48958). 
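Every pull above fails the same way: ghcr.io answers 404 Not Found, so the ghcr.io/flatcar/calico/* references resolve to nothing at tag v3.30.4. A hedged Python sketch of how one could confirm that independently of the kubelet via the Docker Registry HTTP API v2; the anonymous token endpoint, the Accept media type, and the assumption that the repository would be public are all assumptions about ghcr.io, not facts taken from this log:

import json
import urllib.error
import urllib.request

def manifest_exists(repo: str, tag: str) -> bool:
    """Return True if ghcr.io serves a manifest for repo:tag (assumed anonymous-pull flow)."""
    # Assumption: ghcr.io issues anonymous pull tokens for public repositories.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # the same "not found" the containerd resolver reports above
            return False
        raise

print(manifest_exists("flatcar/calico/node-driver-registrar", "v3.30.4"))

A HEAD request is used so no manifest body is transferred; a 404 here corresponds to the "failed to resolve image ... not found" errors containerd logs above.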
Jan 14 01:07:19.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.42:22-4.153.228.146:48958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:19.824818 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:07:19.824957 kernel: audit: type=1130 audit(1768352839.818:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.42:22-4.153.228.146:48958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:20.182000 audit[4963]: USER_ACCT pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.184570 sshd[4963]: Accepted publickey for core from 4.153.228.146 port 48958 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:20.188761 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:20.205963 systemd-logind[1572]: New session 15 of user core. Jan 14 01:07:20.182000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.240909 kernel: audit: type=1101 audit(1768352840.182:787): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.242050 kernel: audit: type=1103 audit(1768352840.182:788): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.242133 kernel: audit: type=1006 audit(1768352840.182:789): pid=4963 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:07:20.241499 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:07:20.182000 audit[4963]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d6ed3a0 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:20.287236 kernel: audit: type=1300 audit(1768352840.182:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d6ed3a0 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:20.287441 kernel: audit: type=1327 audit(1768352840.182:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:20.182000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:20.298104 kernel: audit: type=1105 audit(1768352840.250:790): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.250000 audit[4963]: USER_START pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.250000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.359268 kernel: audit: type=1103 audit(1768352840.250:791): pid=4967 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.487597 sshd[4967]: Connection closed by 4.153.228.146 port 48958 Jan 14 01:07:20.489292 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:20.490000 audit[4963]: USER_END pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.496606 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:07:20.497519 systemd[1]: sshd@13-10.128.0.42:22-4.153.228.146:48958.service: Deactivated successfully. Jan 14 01:07:20.502493 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:07:20.508037 systemd-logind[1572]: Removed session 15. 
Jan 14 01:07:20.533125 kernel: audit: type=1106 audit(1768352840.490:792): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.533295 kernel: audit: type=1104 audit(1768352840.490:793): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.490000 audit[4963]: CRED_DISP pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:20.533458 kubelet[2838]: E0114 01:07:20.529463 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:07:20.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.42:22-4.153.228.146:48958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:25.553353 systemd[1]: Started sshd@14-10.128.0.42:22-4.153.228.146:53214.service - OpenSSH per-connection server daemon (4.153.228.146:53214). Jan 14 01:07:25.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.42:22-4.153.228.146:53214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:25.560557 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:07:25.560663 kernel: audit: type=1130 audit(1768352845.553:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.42:22-4.153.228.146:53214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:25.904000 audit[4979]: USER_ACCT pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.906186 sshd[4979]: Accepted publickey for core from 4.153.228.146 port 53214 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:25.909926 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:25.920846 systemd-logind[1572]: New session 16 of user core. 
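The same handful of images keeps cycling between ErrImagePull and ImagePullBackOff for the rest of this capture. An illustrative Python sketch (the filter string and the image-reference regex are assumptions chosen to fit the records shown here) that tallies which references appear in kubelet "Error syncing pod" records when fed a journal dump on stdin:

import re
import sys
from collections import Counter

# Rough pattern for registry/repo:tag references such as ghcr.io/flatcar/calico/goldmane:v3.30.4.
IMAGE = re.compile(r'[\w.-]+\.[a-z]{2,}/[\w./-]+:[\w.-]+')

counts = Counter()
for line in sys.stdin:
    # Restrict to the kubelet pod_workers records, like the ones in this log.
    if "Error syncing pod" in line:
        counts.update(IMAGE.findall(line))

for image, n in counts.most_common():
    print(f"{n:4d}  {image}")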
Jan 14 01:07:25.904000 audit[4979]: CRED_ACQ pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.963038 kernel: audit: type=1101 audit(1768352845.904:796): pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.963219 kernel: audit: type=1103 audit(1768352845.904:797): pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.963266 kernel: audit: type=1006 audit(1768352845.904:798): pid=4979 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:07:25.904000 audit[4979]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfcd050f0 a2=3 a3=0 items=0 ppid=1 pid=4979 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:25.980492 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:07:26.009191 kernel: audit: type=1300 audit(1768352845.904:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfcd050f0 a2=3 a3=0 items=0 ppid=1 pid=4979 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:25.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:26.020346 kernel: audit: type=1327 audit(1768352845.904:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:26.020476 kernel: audit: type=1105 audit(1768352845.989:799): pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.989000 audit[4979]: USER_START pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:25.989000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:26.088157 kernel: audit: type=1103 audit(1768352845.989:800): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh 
res=success' Jan 14 01:07:26.210456 sshd[4983]: Connection closed by 4.153.228.146 port 53214 Jan 14 01:07:26.210830 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:26.213000 audit[4979]: USER_END pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:26.220322 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:07:26.221199 systemd[1]: sshd@14-10.128.0.42:22-4.153.228.146:53214.service: Deactivated successfully. Jan 14 01:07:26.227709 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:07:26.233221 systemd-logind[1572]: Removed session 16. Jan 14 01:07:26.250111 kernel: audit: type=1106 audit(1768352846.213:801): pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:26.250258 kernel: audit: type=1104 audit(1768352846.213:802): pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:26.213000 audit[4979]: CRED_DISP pid=4979 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:26.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.42:22-4.153.228.146:53214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:26.523142 kubelet[2838]: E0114 01:07:26.522677 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:07:27.523452 kubelet[2838]: E0114 01:07:27.523334 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:07:29.525645 kubelet[2838]: E0114 01:07:29.525530 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:07:29.529225 kubelet[2838]: E0114 01:07:29.529141 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:07:30.526504 kubelet[2838]: E0114 01:07:30.526418 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:07:31.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.42:22-4.153.228.146:53222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:31.275561 systemd[1]: Started sshd@15-10.128.0.42:22-4.153.228.146:53222.service - OpenSSH per-connection server daemon (4.153.228.146:53222). Jan 14 01:07:31.281491 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:07:31.281640 kernel: audit: type=1130 audit(1768352851.274:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.42:22-4.153.228.146:53222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:31.631000 audit[5019]: USER_ACCT pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.633267 sshd[5019]: Accepted publickey for core from 4.153.228.146 port 53222 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:31.637102 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:31.648777 systemd-logind[1572]: New session 17 of user core. Jan 14 01:07:31.664677 kernel: audit: type=1101 audit(1768352851.631:805): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.634000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.692121 kernel: audit: type=1103 audit(1768352851.634:806): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.709104 kernel: audit: type=1006 audit(1768352851.634:807): pid=5019 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:07:31.709946 kernel: audit: type=1300 audit(1768352851.634:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b5edd80 a2=3 a3=0 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:31.634000 audit[5019]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b5edd80 a2=3 a3=0 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:31.739750 systemd[1]: Started session-17.scope - Session 17 of 
User core. Jan 14 01:07:31.634000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:31.746000 audit[5019]: USER_START pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.793863 kernel: audit: type=1327 audit(1768352851.634:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:31.793990 kernel: audit: type=1105 audit(1768352851.746:808): pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.794096 kernel: audit: type=1103 audit(1768352851.751:809): pid=5023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.751000 audit[5023]: CRED_ACQ pid=5023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.984691 sshd[5023]: Connection closed by 4.153.228.146 port 53222 Jan 14 01:07:31.985724 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:31.987000 audit[5019]: USER_END pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.993798 systemd[1]: sshd@15-10.128.0.42:22-4.153.228.146:53222.service: Deactivated successfully. Jan 14 01:07:31.997706 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:07:32.005703 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:07:32.007416 systemd-logind[1572]: Removed session 17. 
Jan 14 01:07:31.987000 audit[5019]: CRED_DISP pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.050125 kernel: audit: type=1106 audit(1768352851.987:810): pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.050333 kernel: audit: type=1104 audit(1768352851.987:811): pid=5019 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:31.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.42:22-4.153.228.146:53222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:32.063056 systemd[1]: Started sshd@16-10.128.0.42:22-4.153.228.146:53234.service - OpenSSH per-connection server daemon (4.153.228.146:53234). Jan 14 01:07:32.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.42:22-4.153.228.146:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:32.400000 audit[5035]: USER_ACCT pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.402274 sshd[5035]: Accepted publickey for core from 4.153.228.146 port 53234 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:32.402000 audit[5035]: CRED_ACQ pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.402000 audit[5035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5695a710 a2=3 a3=0 items=0 ppid=1 pid=5035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:32.402000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:32.404586 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:32.412278 systemd-logind[1572]: New session 18 of user core. Jan 14 01:07:32.420433 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:07:32.424000 audit[5035]: USER_START pid=5035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.427000 audit[5039]: CRED_ACQ pid=5039 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.708867 sshd[5039]: Connection closed by 4.153.228.146 port 53234 Jan 14 01:07:32.710428 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:32.712000 audit[5035]: USER_END pid=5035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.712000 audit[5035]: CRED_DISP pid=5035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:32.717042 systemd[1]: sshd@16-10.128.0.42:22-4.153.228.146:53234.service: Deactivated successfully. Jan 14 01:07:32.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.42:22-4.153.228.146:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:32.721107 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:07:32.723928 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:07:32.725521 systemd-logind[1572]: Removed session 18. Jan 14 01:07:32.785953 systemd[1]: Started sshd@17-10.128.0.42:22-4.153.228.146:53240.service - OpenSSH per-connection server daemon (4.153.228.146:53240). Jan 14 01:07:32.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.42:22-4.153.228.146:53240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:33.158000 audit[5048]: USER_ACCT pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:33.160205 sshd[5048]: Accepted publickey for core from 4.153.228.146 port 53240 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:33.161000 audit[5048]: CRED_ACQ pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:33.161000 audit[5048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff04f93060 a2=3 a3=0 items=0 ppid=1 pid=5048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:33.161000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:33.163792 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:33.172147 systemd-logind[1572]: New session 19 of user core. Jan 14 01:07:33.177474 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:07:33.181000 audit[5048]: USER_START pid=5048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:33.184000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:33.525118 kubelet[2838]: E0114 01:07:33.523665 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:07:34.055198 sshd[5052]: Connection closed by 4.153.228.146 port 53240 Jan 14 01:07:34.053627 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:34.055000 audit[5048]: USER_END pid=5048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.055000 audit[5048]: CRED_DISP pid=5048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 
terminal=ssh res=success' Jan 14 01:07:34.065278 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:07:34.066814 systemd[1]: sshd@17-10.128.0.42:22-4.153.228.146:53240.service: Deactivated successfully. Jan 14 01:07:34.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.42:22-4.153.228.146:53240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:34.073896 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:07:34.078911 systemd-logind[1572]: Removed session 19. Jan 14 01:07:34.126563 systemd[1]: Started sshd@18-10.128.0.42:22-4.153.228.146:53256.service - OpenSSH per-connection server daemon (4.153.228.146:53256). Jan 14 01:07:34.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.42:22-4.153.228.146:53256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:34.167000 audit[5066]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:34.167000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd96982a90 a2=0 a3=7ffd96982a7c items=0 ppid=3020 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:34.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:34.174000 audit[5066]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:34.174000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd96982a90 a2=0 a3=0 items=0 ppid=3020 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:34.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:34.194000 audit[5070]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:34.194000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff71bf6820 a2=0 a3=7fff71bf680c items=0 ppid=3020 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:34.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:34.200000 audit[5070]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:34.200000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff71bf6820 a2=0 a3=0 items=0 ppid=3020 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:34.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:34.498000 audit[5065]: USER_ACCT pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.500359 sshd[5065]: Accepted publickey for core from 4.153.228.146 port 53256 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:34.501000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.501000 audit[5065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcde1e1e60 a2=3 a3=0 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:34.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:34.504051 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:34.511759 systemd-logind[1572]: New session 20 of user core. Jan 14 01:07:34.520401 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 01:07:34.524000 audit[5065]: USER_START pid=5065 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.527000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.896441 sshd[5072]: Connection closed by 4.153.228.146 port 53256 Jan 14 01:07:34.897942 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:34.899000 audit[5065]: USER_END pid=5065 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.899000 audit[5065]: CRED_DISP pid=5065 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:34.905317 systemd[1]: sshd@18-10.128.0.42:22-4.153.228.146:53256.service: Deactivated successfully. 
Jan 14 01:07:34.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.42:22-4.153.228.146:53256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:34.907990 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:07:34.909648 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:07:34.912055 systemd-logind[1572]: Removed session 20. Jan 14 01:07:34.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.42:22-4.153.228.146:33740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:34.968291 systemd[1]: Started sshd@19-10.128.0.42:22-4.153.228.146:33740.service - OpenSSH per-connection server daemon (4.153.228.146:33740). Jan 14 01:07:35.313000 audit[5082]: USER_ACCT pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.314534 sshd[5082]: Accepted publickey for core from 4.153.228.146 port 33740 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:35.315000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.315000 audit[5082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee39c9b20 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:35.315000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:35.317937 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:35.329742 systemd-logind[1572]: New session 21 of user core. Jan 14 01:07:35.341400 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:07:35.345000 audit[5082]: USER_START pid=5082 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.348000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.565650 sshd[5086]: Connection closed by 4.153.228.146 port 33740 Jan 14 01:07:35.566911 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:35.568000 audit[5082]: USER_END pid=5082 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.568000 audit[5082]: CRED_DISP pid=5082 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:35.576670 systemd[1]: sshd@19-10.128.0.42:22-4.153.228.146:33740.service: Deactivated successfully. Jan 14 01:07:35.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.42:22-4.153.228.146:33740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:35.582539 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:07:35.585311 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:07:35.590553 systemd-logind[1572]: Removed session 21. Jan 14 01:07:40.522576 kubelet[2838]: E0114 01:07:40.522501 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0" Jan 14 01:07:40.524020 kubelet[2838]: E0114 01:07:40.523629 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849" Jan 14 01:07:40.632693 systemd[1]: Started sshd@20-10.128.0.42:22-4.153.228.146:33744.service - OpenSSH per-connection server daemon (4.153.228.146:33744). 
Jan 14 01:07:40.641621 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 01:07:40.641789 kernel: audit: type=1130 audit(1768352860.632:853): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.42:22-4.153.228.146:33744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:40.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.42:22-4.153.228.146:33744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:40.995866 sshd[5098]: Accepted publickey for core from 4.153.228.146 port 33744 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:40.994000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.001705 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:41.028753 kernel: audit: type=1101 audit(1768352860.994:854): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.029206 kernel: audit: type=1103 audit(1768352860.997:855): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:40.997000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.036535 systemd-logind[1572]: New session 22 of user core. Jan 14 01:07:41.067155 kernel: audit: type=1006 audit(1768352860.998:856): pid=5098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:07:40.998000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe337385b0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:41.102134 kernel: audit: type=1300 audit(1768352860.998:856): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe337385b0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:41.102604 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:07:40.998000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:41.116159 kernel: audit: type=1327 audit(1768352860.998:856): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:07:41.070000 audit[5103]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:41.070000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc6018290 a2=0 a3=7ffdc601827c items=0 ppid=3020 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:41.134126 kernel: audit: type=1325 audit(1768352861.070:857): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:41.134201 kernel: audit: type=1300 audit(1768352861.070:857): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc6018290 a2=0 a3=7ffdc601827c items=0 ppid=3020 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:41.171137 kernel: audit: type=1327 audit(1768352861.070:857): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:41.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:41.110000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.183108 kernel: audit: type=1105 audit(1768352861.110:858): pid=5098 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.114000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.115000 audit[5103]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:07:41.115000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdc6018290 a2=0 a3=7ffdc601827c items=0 ppid=3020 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:41.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:07:41.360170 sshd[5104]: Connection closed by 
4.153.228.146 port 33744 Jan 14 01:07:41.362292 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:41.363000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.364000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:41.368586 systemd[1]: sshd@20-10.128.0.42:22-4.153.228.146:33744.service: Deactivated successfully. Jan 14 01:07:41.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.42:22-4.153.228.146:33744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:41.371848 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:07:41.373900 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:07:41.376772 systemd-logind[1572]: Removed session 22. Jan 14 01:07:41.527100 kubelet[2838]: E0114 01:07:41.526206 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3" Jan 14 01:07:41.527100 kubelet[2838]: E0114 01:07:41.526366 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108" Jan 14 01:07:44.522486 containerd[1606]: time="2026-01-14T01:07:44.522426664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:07:44.685772 containerd[1606]: time="2026-01-14T01:07:44.685683434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:44.688334 containerd[1606]: time="2026-01-14T01:07:44.688257138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:07:44.688750 containerd[1606]: time="2026-01-14T01:07:44.688400297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:44.688883 kubelet[2838]: E0114 01:07:44.688602 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:07:44.688883 kubelet[2838]: E0114 01:07:44.688674 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:07:44.689740 kubelet[2838]: E0114 01:07:44.688846 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5e3e935377b84f50a445073ab6fc7676,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:44.692155 containerd[1606]: time="2026-01-14T01:07:44.692064681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:07:44.866253 containerd[1606]: time="2026-01-14T01:07:44.866049663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:07:44.868059 containerd[1606]: time="2026-01-14T01:07:44.867988899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:07:44.868059 
containerd[1606]: time="2026-01-14T01:07:44.868007467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:07:44.868414 kubelet[2838]: E0114 01:07:44.868329 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:07:44.868414 kubelet[2838]: E0114 01:07:44.868394 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:07:44.868647 kubelet[2838]: E0114 01:07:44.868556 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4zs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd9b68968-72tft_calico-system(1e67631e-c93b-4515-bed4-6b4bf787eb57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:07:44.870218 kubelet[2838]: E0114 01:07:44.870131 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd9b68968-72tft" podUID="1e67631e-c93b-4515-bed4-6b4bf787eb57" Jan 14 01:07:45.523537 kubelet[2838]: E0114 01:07:45.523442 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5j8fj" podUID="5c1341b7-068c-4ccd-ba99-cbf173a0144f" Jan 14 01:07:46.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.42:22-4.153.228.146:54788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:46.426744 systemd[1]: Started sshd@21-10.128.0.42:22-4.153.228.146:54788.service - OpenSSH per-connection server daemon (4.153.228.146:54788). Jan 14 01:07:46.432750 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:07:46.432863 kernel: audit: type=1130 audit(1768352866.425:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.42:22-4.153.228.146:54788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:46.781000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:46.782837 sshd[5118]: Accepted publickey for core from 4.153.228.146 port 54788 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU Jan 14 01:07:46.786616 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:46.798400 systemd-logind[1572]: New session 23 of user core. Jan 14 01:07:46.813388 kernel: audit: type=1101 audit(1768352866.781:865): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:46.813551 kernel: audit: type=1103 audit(1768352866.783:866): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:46.783000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:07:46.814522 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 01:07:46.784000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe53fb2b30 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:46.886281 kernel: audit: type=1006 audit(1768352866.784:867): pid=5118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jan 14 01:07:46.886466 kernel: audit: type=1300 audit(1768352866.784:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe53fb2b30 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:46.886514 kernel: audit: type=1327 audit(1768352866.784:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:07:46.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:07:46.823000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:46.932744 kernel: audit: type=1105 audit(1768352866.823:868): pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:46.828000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:46.958442 kernel: audit: type=1103 audit(1768352866.828:869): pid=5122 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:47.068549 sshd[5122]: Connection closed by 4.153.228.146 port 54788
Jan 14 01:07:47.069397 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Jan 14 01:07:47.071000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:47.079390 systemd[1]: sshd@21-10.128.0.42:22-4.153.228.146:54788.service: Deactivated successfully.
Jan 14 01:07:47.084427 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 01:07:47.088539 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit.
Jan 14 01:07:47.091908 systemd-logind[1572]: Removed session 23.
Jan 14 01:07:47.071000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:47.135061 kernel: audit: type=1106 audit(1768352867.071:870): pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:47.135262 kernel: audit: type=1104 audit(1768352867.071:871): pid=5118 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:47.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.42:22-4.153.228.146:54788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:52.164971 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:07:52.165324 kernel: audit: type=1130 audit(1768352872.133:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.42:22-4.153.228.146:54790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:52.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.42:22-4.153.228.146:54790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:52.134304 systemd[1]: Started sshd@22-10.128.0.42:22-4.153.228.146:54790.service - OpenSSH per-connection server daemon (4.153.228.146:54790).
Jan 14 01:07:52.529100 kubelet[2838]: E0114 01:07:52.529016 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-jwfn5" podUID="b6e624a7-3dff-47e8-aa9a-152cfa985108"
Jan 14 01:07:52.529766 kubelet[2838]: E0114 01:07:52.529414 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599f9958f6-q8lcg" podUID="4615652e-e23f-4213-b02a-57b7f1d1c9f0"
Jan 14 01:07:52.541000 audit[5144]: USER_ACCT pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.549014 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:07:52.551412 sshd[5144]: Accepted publickey for core from 4.153.228.146 port 54790 ssh2: RSA SHA256:Ok70x9gfuccGsouOPVZvf4R47i+14mVBEJ6KaTyi8hU
Jan 14 01:07:52.573114 kernel: audit: type=1101 audit(1768352872.541:874): pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.583427 systemd-logind[1572]: New session 24 of user core.
Jan 14 01:07:52.541000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.614119 kernel: audit: type=1103 audit(1768352872.541:875): pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.615449 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 01:07:52.633376 kernel: audit: type=1006 audit(1768352872.541:876): pid=5144 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jan 14 01:07:52.541000 audit[5144]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc21902730 a2=3 a3=0 items=0 ppid=1 pid=5144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:52.541000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:07:52.677119 kernel: audit: type=1300 audit(1768352872.541:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc21902730 a2=3 a3=0 items=0 ppid=1 pid=5144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:52.677274 kernel: audit: type=1327 audit(1768352872.541:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:07:52.677338 kernel: audit: type=1105 audit(1768352872.633:877): pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.633000 audit[5144]: USER_START pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.740890 kernel: audit: type=1103 audit(1768352872.639:878): pid=5148 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.639000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.908902 sshd[5148]: Connection closed by 4.153.228.146 port 54790
Jan 14 01:07:52.910372 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Jan 14 01:07:52.913000 audit[5144]: USER_END pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.951100 kernel: audit: type=1106 audit(1768352872.913:879): pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.913000 audit[5144]: CRED_DISP pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.977124 kernel: audit: type=1104 audit(1768352872.913:880): pid=5144 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:07:52.977432 systemd[1]: sshd@22-10.128.0.42:22-4.153.228.146:54790.service: Deactivated successfully.
Jan 14 01:07:52.980503 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 01:07:52.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.42:22-4.153.228.146:54790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:52.986045 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit.
Jan 14 01:07:52.987767 systemd-logind[1572]: Removed session 24.
Jan 14 01:07:53.527599 kubelet[2838]: E0114 01:07:53.525968 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79fc68bf94-hv66g" podUID="84272add-ebcc-42f7-bffa-247234ecb849"
Jan 14 01:07:53.531846 kubelet[2838]: E0114 01:07:53.531792 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6cw62" podUID="8d5160b5-f0c7-4f7b-963d-652ff95653a3"