Jan 16 21:17:40.499300 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 18:44:02 -00 2026 Jan 16 21:17:40.499326 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099 Jan 16 21:17:40.499337 kernel: BIOS-provided physical RAM map: Jan 16 21:17:40.499344 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 16 21:17:40.499351 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 16 21:17:40.499357 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 16 21:17:40.499365 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 16 21:17:40.499372 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 16 21:17:40.499379 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 16 21:17:40.499386 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 16 21:17:40.499391 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 16 21:17:40.499397 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 16 21:17:40.499403 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 16 21:17:40.499409 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 16 21:17:40.499417 kernel: NX (Execute Disable) protection: active Jan 16 21:17:40.499425 kernel: APIC: Static calls initialized Jan 16 21:17:40.499432 kernel: efi: EFI v2.7 by Microsoft Jan 16 21:17:40.499439 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa4018 RNG=0x3ffd2018 Jan 16 21:17:40.499446 kernel: random: crng init done Jan 16 21:17:40.499453 kernel: secureboot: Secure boot disabled Jan 16 21:17:40.499459 kernel: SMBIOS 3.1.0 present. Jan 16 21:17:40.499466 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 16 21:17:40.499473 kernel: DMI: Memory slots populated: 2/2 Jan 16 21:17:40.499480 kernel: Hypervisor detected: Microsoft Hyper-V Jan 16 21:17:40.499487 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 16 21:17:40.499495 kernel: Hyper-V: Nested features: 0x3e0101 Jan 16 21:17:40.499502 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 16 21:17:40.499509 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 16 21:17:40.499516 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 16 21:17:40.499523 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 16 21:17:40.499530 kernel: tsc: Detected 2299.998 MHz processor Jan 16 21:17:40.499537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 16 21:17:40.499545 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 16 21:17:40.499553 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 16 21:17:40.499562 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 16 21:17:40.499570 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 16 21:17:40.499577 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 16 21:17:40.499584 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 16 21:17:40.499592 kernel: Using GB pages for direct mapping Jan 16 21:17:40.499599 kernel: ACPI: Early table checksum verification 
disabled Jan 16 21:17:40.499611 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 16 21:17:40.499618 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499627 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499634 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 16 21:17:40.499642 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 16 21:17:40.499650 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499659 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499666 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499673 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 16 21:17:40.499680 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 16 21:17:40.499687 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 16 21:17:40.499695 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 16 21:17:40.499704 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 16 21:17:40.499713 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 16 21:17:40.499721 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 16 21:17:40.499729 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 16 21:17:40.499737 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 16 21:17:40.499745 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 16 21:17:40.499753 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 16 21:17:40.499762 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Jan 16 21:17:40.499769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 16 21:17:40.499777 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 16 21:17:40.499785 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 16 21:17:40.499793 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 16 21:17:40.499801 kernel: Zone ranges: Jan 16 21:17:40.499808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 16 21:17:40.499818 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 16 21:17:40.499825 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 16 21:17:40.500046 kernel: Device empty Jan 16 21:17:40.500055 kernel: Movable zone start for each node Jan 16 21:17:40.500063 kernel: Early memory node ranges Jan 16 21:17:40.500071 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 16 21:17:40.500078 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 16 21:17:40.500088 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 16 21:17:40.500096 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 16 21:17:40.500109 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 16 21:17:40.500116 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 16 21:17:40.500124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 16 21:17:40.500132 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 16 21:17:40.500140 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 16 21:17:40.500148 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 16 21:17:40.500157 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 16 21:17:40.500165 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 16 21:17:40.500172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 16 21:17:40.500180 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 16 21:17:40.500188 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 16 21:17:40.500196 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 16 21:17:40.500204 kernel: TSC deadline timer available Jan 16 21:17:40.500213 kernel: CPU topo: Max. logical packages: 1 Jan 16 21:17:40.500221 kernel: CPU topo: Max. logical dies: 1 Jan 16 21:17:40.500228 kernel: CPU topo: Max. dies per package: 1 Jan 16 21:17:40.500236 kernel: CPU topo: Max. threads per core: 2 Jan 16 21:17:40.500243 kernel: CPU topo: Num. cores per package: 1 Jan 16 21:17:40.500251 kernel: CPU topo: Num. threads per package: 2 Jan 16 21:17:40.500259 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 16 21:17:40.500268 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 16 21:17:40.500276 kernel: Booting paravirtualized kernel on Hyper-V Jan 16 21:17:40.500284 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 16 21:17:40.500292 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 16 21:17:40.500300 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 16 21:17:40.500308 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 16 21:17:40.500315 kernel: pcpu-alloc: [0] 0 1 Jan 16 21:17:40.500325 kernel: Hyper-V: PV spinlocks enabled Jan 16 21:17:40.500332 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 16 21:17:40.500341 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin 
verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099 Jan 16 21:17:40.500349 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 16 21:17:40.500357 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 16 21:17:40.500366 kernel: Fallback order for Node 0: 0 Jan 16 21:17:40.500375 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 16 21:17:40.500384 kernel: Policy zone: Normal Jan 16 21:17:40.500392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 21:17:40.500399 kernel: software IO TLB: area num 2. Jan 16 21:17:40.500407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 21:17:40.500414 kernel: ftrace: allocating 40128 entries in 157 pages Jan 16 21:17:40.500422 kernel: ftrace: allocated 157 pages with 5 groups Jan 16 21:17:40.500429 kernel: Dynamic Preempt: voluntary Jan 16 21:17:40.500438 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 21:17:40.500447 kernel: rcu: RCU event tracing is enabled. Jan 16 21:17:40.500461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 21:17:40.500470 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 21:17:40.500478 kernel: Rude variant of Tasks RCU enabled. Jan 16 21:17:40.500487 kernel: Tracing variant of Tasks RCU enabled. Jan 16 21:17:40.500495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 21:17:40.500503 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 21:17:40.500511 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 21:17:40.500522 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 21:17:40.500530 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 16 21:17:40.500538 kernel: Using NULL legacy PIC Jan 16 21:17:40.500547 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 16 21:17:40.500556 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 16 21:17:40.500564 kernel: Console: colour dummy device 80x25 Jan 16 21:17:40.500573 kernel: printk: legacy console [tty1] enabled Jan 16 21:17:40.500581 kernel: printk: legacy console [ttyS0] enabled Jan 16 21:17:40.500589 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 16 21:17:40.500598 kernel: ACPI: Core revision 20240827 Jan 16 21:17:40.500607 kernel: Failed to register legacy timer interrupt Jan 16 21:17:40.500617 kernel: APIC: Switch to symmetric I/O mode setup Jan 16 21:17:40.500625 kernel: x2apic enabled Jan 16 21:17:40.500633 kernel: APIC: Switched APIC routing to: physical x2apic Jan 16 21:17:40.500640 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 16 21:17:40.500648 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 16 21:17:40.500656 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 16 21:17:40.500664 kernel: Hyper-V: Using IPI hypercalls Jan 16 21:17:40.500673 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 16 21:17:40.500681 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 16 21:17:40.500689 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 16 21:17:40.500697 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 16 21:17:40.500705 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 16 21:17:40.500713 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 16 21:17:40.500721 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 16 21:17:40.500732 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4599.99 BogoMIPS (lpj=2299998) Jan 16 21:17:40.500741 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 16 21:17:40.500750 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 16 21:17:40.500759 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 16 21:17:40.500767 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 16 21:17:40.500775 kernel: Spectre V2 : Mitigation: Retpolines Jan 16 21:17:40.500783 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 16 21:17:40.500792 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 16 21:17:40.500803 kernel: RETBleed: Vulnerable Jan 16 21:17:40.500810 kernel: Speculative Store Bypass: Vulnerable Jan 16 21:17:40.500819 kernel: active return thunk: its_return_thunk Jan 16 21:17:40.500827 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 16 21:17:40.500856 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 16 21:17:40.500863 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 16 21:17:40.500871 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 16 21:17:40.500878 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 16 21:17:40.500885 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 16 21:17:40.500892 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 16 21:17:40.500900 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 16 21:17:40.500907 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 16 21:17:40.500915 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 16 21:17:40.500922 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 16 21:17:40.500930 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 16 21:17:40.500937 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 16 21:17:40.500944 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 16 21:17:40.500952 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 16 21:17:40.500959 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 16 21:17:40.500966 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 16 21:17:40.500974 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 16 21:17:40.500983 kernel: Freeing SMP alternatives memory: 32K Jan 16 21:17:40.500991 kernel: pid_max: default: 32768 minimum: 301 Jan 16 21:17:40.500999 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 16 21:17:40.501007 kernel: landlock: Up and running. Jan 16 21:17:40.501015 kernel: SELinux: Initializing. Jan 16 21:17:40.501023 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 21:17:40.501030 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 21:17:40.501038 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 16 21:17:40.501046 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 16 21:17:40.501054 kernel: signal: max sigframe size: 11952 Jan 16 21:17:40.501064 kernel: rcu: Hierarchical SRCU implementation. Jan 16 21:17:40.501073 kernel: rcu: Max phase no-delay instances is 400. Jan 16 21:17:40.501081 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 16 21:17:40.501091 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 16 21:17:40.501099 kernel: smp: Bringing up secondary CPUs ... Jan 16 21:17:40.501107 kernel: smpboot: x86: Booting SMP configuration: Jan 16 21:17:40.501115 kernel: .... 
node #0, CPUs: #1 Jan 16 21:17:40.501122 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 21:17:40.501133 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 16 21:17:40.501142 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 16 21:17:40.501151 kernel: devtmpfs: initialized Jan 16 21:17:40.501159 kernel: x86/mm: Memory block size: 128MB Jan 16 21:17:40.501167 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 16 21:17:40.501175 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 21:17:40.501183 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 21:17:40.501193 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 21:17:40.501202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 21:17:40.501210 kernel: audit: initializing netlink subsys (disabled) Jan 16 21:17:40.501218 kernel: audit: type=2000 audit(1768598257.084:1): state=initialized audit_enabled=0 res=1 Jan 16 21:17:40.501227 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 21:17:40.501235 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 16 21:17:40.501243 kernel: cpuidle: using governor menu Jan 16 21:17:40.501252 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 21:17:40.501260 kernel: dca service started, version 1.12.1 Jan 16 21:17:40.501269 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 16 21:17:40.501278 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 16 21:17:40.501286 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 16 21:17:40.501294 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 16 21:17:40.501302 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 16 21:17:40.501312 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 21:17:40.501320 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 21:17:40.501328 kernel: ACPI: Added _OSI(Module Device) Jan 16 21:17:40.501337 kernel: ACPI: Added _OSI(Processor Device) Jan 16 21:17:40.501345 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 21:17:40.501353 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 21:17:40.501361 kernel: ACPI: Interpreter enabled Jan 16 21:17:40.501371 kernel: ACPI: PM: (supports S0 S5) Jan 16 21:17:40.501379 kernel: ACPI: Using IOAPIC for interrupt routing Jan 16 21:17:40.501387 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 16 21:17:40.501395 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 16 21:17:40.501403 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 16 21:17:40.501411 kernel: iommu: Default domain type: Translated Jan 16 21:17:40.501420 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 16 21:17:40.501429 kernel: efivars: Registered efivars operations Jan 16 21:17:40.501437 kernel: PCI: Using ACPI for IRQ routing Jan 16 21:17:40.501445 kernel: PCI: System does not support PCI Jan 16 21:17:40.501453 kernel: vgaarb: loaded Jan 16 21:17:40.501461 kernel: clocksource: Switched to clocksource tsc-early Jan 16 21:17:40.501470 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 21:17:40.501478 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 21:17:40.501488 kernel: pnp: PnP ACPI init Jan 16 21:17:40.501496 kernel: pnp: PnP ACPI: found 3 devices Jan 16 21:17:40.501504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 16 
21:17:40.501512 kernel: NET: Registered PF_INET protocol family Jan 16 21:17:40.501520 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 16 21:17:40.501528 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 16 21:17:40.501537 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 21:17:40.501547 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 16 21:17:40.501555 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 16 21:17:40.501563 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 16 21:17:40.501572 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 16 21:17:40.501580 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 16 21:17:40.501588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 21:17:40.501596 kernel: NET: Registered PF_XDP protocol family Jan 16 21:17:40.501606 kernel: PCI: CLS 0 bytes, default 64 Jan 16 21:17:40.501614 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 16 21:17:40.501623 kernel: software IO TLB: mapped [mem 0x000000003a9af000-0x000000003e9af000] (64MB) Jan 16 21:17:40.501631 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 16 21:17:40.501639 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 16 21:17:40.501647 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 16 21:17:40.501655 kernel: clocksource: Switched to clocksource tsc Jan 16 21:17:40.501664 kernel: Initialise system trusted keyrings Jan 16 21:17:40.501673 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 16 21:17:40.501681 kernel: Key type asymmetric registered Jan 16 21:17:40.501690 kernel: Asymmetric key parser 'x509' registered Jan 16 21:17:40.501697 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jan 16 21:17:40.501706 kernel: io scheduler mq-deadline registered Jan 16 21:17:40.501714 kernel: io scheduler kyber registered Jan 16 21:17:40.501723 kernel: io scheduler bfq registered Jan 16 21:17:40.501731 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 16 21:17:40.501740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 21:17:40.501748 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 21:17:40.501757 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 16 21:17:40.501765 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 21:17:40.501773 kernel: i8042: PNP: No PS/2 controller found. Jan 16 21:17:40.501925 kernel: rtc_cmos 00:02: registered as rtc0 Jan 16 21:17:40.502018 kernel: rtc_cmos 00:02: setting system clock to 2026-01-16T21:17:38 UTC (1768598258) Jan 16 21:17:40.502104 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 16 21:17:40.502114 kernel: intel_pstate: Intel P-state driver initializing Jan 16 21:17:40.502122 kernel: efifb: probing for efifb Jan 16 21:17:40.502130 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 16 21:17:40.502141 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 16 21:17:40.502150 kernel: efifb: scrolling: redraw Jan 16 21:17:40.502158 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 16 21:17:40.502166 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 21:17:40.502174 kernel: fb0: EFI VGA frame buffer device Jan 16 21:17:40.502182 kernel: pstore: Using crash dump compression: deflate Jan 16 21:17:40.502190 kernel: pstore: Registered efi_pstore as persistent store backend Jan 16 21:17:40.502198 kernel: NET: Registered PF_INET6 protocol family Jan 16 21:17:40.502208 kernel: Segment Routing with IPv6 Jan 16 21:17:40.502217 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 
21:17:40.502225 kernel: NET: Registered PF_PACKET protocol family Jan 16 21:17:40.502234 kernel: Key type dns_resolver registered Jan 16 21:17:40.502242 kernel: IPI shorthand broadcast: enabled Jan 16 21:17:40.502250 kernel: sched_clock: Marking stable (1741266051, 85524230)->(2100022132, -273231851) Jan 16 21:17:40.502258 kernel: registered taskstats version 1 Jan 16 21:17:40.502268 kernel: Loading compiled-in X.509 certificates Jan 16 21:17:40.502276 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a9591db9912320a48a0589d0293fff3e535b90df' Jan 16 21:17:40.502285 kernel: Demotion targets for Node 0: null Jan 16 21:17:40.502294 kernel: Key type .fscrypt registered Jan 16 21:17:40.502302 kernel: Key type fscrypt-provisioning registered Jan 16 21:17:40.502310 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 21:17:40.502318 kernel: ima: Allocated hash algorithm: sha1 Jan 16 21:17:40.502327 kernel: ima: No architecture policies found Jan 16 21:17:40.502335 kernel: clk: Disabling unused clocks Jan 16 21:17:40.502344 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 16 21:17:40.502352 kernel: Write protecting the kernel read-only data: 47104k Jan 16 21:17:40.502361 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 16 21:17:40.502369 kernel: Run /init as init process Jan 16 21:17:40.502377 kernel: with arguments: Jan 16 21:17:40.502386 kernel: /init Jan 16 21:17:40.502394 kernel: with environment: Jan 16 21:17:40.502402 kernel: HOME=/ Jan 16 21:17:40.502410 kernel: TERM=linux Jan 16 21:17:40.502418 kernel: hv_vmbus: Vmbus version:5.3 Jan 16 21:17:40.502427 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 16 21:17:40.502435 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 16 21:17:40.502443 kernel: PTP clock support registered Jan 16 21:17:40.502453 kernel: hv_utils: Registering HyperV Utility Driver Jan 16 21:17:40.502460 kernel: SCSI subsystem initialized Jan 16 21:17:40.502469 kernel: hv_vmbus: registering driver hv_utils Jan 16 21:17:40.502477 kernel: hv_utils: Shutdown IC version 3.2 Jan 16 21:17:40.502485 kernel: hv_utils: Heartbeat IC version 3.0 Jan 16 21:17:40.502494 kernel: hv_utils: TimeSync IC version 4.0 Jan 16 21:17:40.502502 kernel: hv_vmbus: registering driver hv_pci Jan 16 21:17:40.502618 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 16 21:17:40.502715 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 16 21:17:40.502821 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 16 21:17:40.503204 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 16 21:17:40.504905 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 16 21:17:40.505080 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 16 21:17:40.505188 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 16 21:17:40.505295 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 16 21:17:40.505305 kernel: hv_vmbus: registering driver hv_storvsc Jan 16 21:17:40.505415 kernel: scsi host0: storvsc_host_t Jan 16 21:17:40.505566 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 16 21:17:40.505579 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 16 21:17:40.505589 kernel: hv_vmbus: registering driver hid_hyperv Jan 16 21:17:40.505599 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 16 21:17:40.505728 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] 
on Jan 16 21:17:40.505742 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 16 21:17:40.505755 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 16 21:17:40.505883 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 16 21:17:40.505996 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 16 21:17:40.506078 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 16 21:17:40.506090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 16 21:17:40.506199 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 16 21:17:40.506211 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 16 21:17:40.506315 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 16 21:17:40.506326 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 21:17:40.506336 kernel: device-mapper: uevent: version 1.0.3 Jan 16 21:17:40.506345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 16 21:17:40.506354 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 16 21:17:40.506375 kernel: raid6: avx512x4 gen() 47308 MB/s Jan 16 21:17:40.506385 kernel: raid6: avx512x2 gen() 47276 MB/s Jan 16 21:17:40.506394 kernel: raid6: avx512x1 gen() 30208 MB/s Jan 16 21:17:40.506403 kernel: raid6: avx2x4 gen() 43579 MB/s Jan 16 21:17:40.506410 kernel: raid6: avx2x2 gen() 44351 MB/s Jan 16 21:17:40.506419 kernel: raid6: avx2x1 gen() 32784 MB/s Jan 16 21:17:40.506428 kernel: raid6: using algorithm avx512x4 gen() 47308 MB/s Jan 16 21:17:40.506438 kernel: raid6: .... 
xor() 8187 MB/s, rmw enabled Jan 16 21:17:40.506446 kernel: raid6: using avx512x2 recovery algorithm Jan 16 21:17:40.506455 kernel: xor: automatically using best checksumming function avx Jan 16 21:17:40.506464 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 21:17:40.506473 kernel: BTRFS: device fsid a5f82c06-1ff1-43b3-a650-214802f1359b devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (965) Jan 16 21:17:40.506482 kernel: BTRFS info (device dm-0): first mount of filesystem a5f82c06-1ff1-43b3-a650-214802f1359b Jan 16 21:17:40.506491 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 21:17:40.506500 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 21:17:40.506509 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 21:17:40.506518 kernel: BTRFS info (device dm-0): enabling free space tree Jan 16 21:17:40.506527 kernel: loop: module loaded Jan 16 21:17:40.506535 kernel: loop0: detected capacity change from 0 to 100536 Jan 16 21:17:40.506544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 21:17:40.506553 systemd[1]: Successfully made /usr/ read-only. Jan 16 21:17:40.506568 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 21:17:40.506577 systemd[1]: Detected virtualization microsoft. Jan 16 21:17:40.506586 systemd[1]: Detected architecture x86-64. Jan 16 21:17:40.506595 systemd[1]: Running in initrd. Jan 16 21:17:40.506604 systemd[1]: No hostname configured, using default hostname. Jan 16 21:17:40.506613 systemd[1]: Hostname set to . Jan 16 21:17:40.506624 systemd[1]: Initializing machine ID from random generator. 
Jan 16 21:17:40.506633 systemd[1]: Queued start job for default target initrd.target.
Jan 16 21:17:40.506642 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:17:40.506651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:17:40.506662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:17:40.506672 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 21:17:40.506683 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 21:17:40.506693 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 21:17:40.506702 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 21:17:40.506712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:17:40.506722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:17:40.506731 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:17:40.506740 systemd[1]: Reached target paths.target - Path Units.
Jan 16 21:17:40.506749 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 21:17:40.506758 systemd[1]: Reached target swap.target - Swaps.
Jan 16 21:17:40.506767 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 21:17:40.506777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:17:40.506788 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:17:40.506797 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:17:40.506806 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 21:17:40.506815 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 16 21:17:40.506824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:17:40.507912 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:17:40.507931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:17:40.507941 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 21:17:40.507951 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 21:17:40.507960 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 21:17:40.507969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 21:17:40.507977 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 21:17:40.507987 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 16 21:17:40.507998 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 21:17:40.508008 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 21:17:40.508017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 21:17:40.508026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:17:40.508061 systemd-journald[1102]: Collecting audit messages is enabled.
Jan 16 21:17:40.508082 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 21:17:40.508091 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:17:40.508102 kernel: audit: type=1130 audit(1768598260.502:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.508113 systemd-journald[1102]: Journal started
Jan 16 21:17:40.508133 systemd-journald[1102]: Runtime Journal (/run/log/journal/586b3a377e6e47a597ca4a8ffbc85691) is 8M, max 158.5M, 150.5M free.
Jan 16 21:17:40.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.513133 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 21:17:40.515700 kernel: audit: type=1130 audit(1768598260.510:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.523003 kernel: audit: type=1130 audit(1768598260.516:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.523131 kernel: audit: type=1130 audit(1768598260.520:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.517929 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 21:17:40.526850 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 21:17:40.537331 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 21:17:40.562054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:17:40.580933 kernel: audit: type=1130 audit(1768598260.564:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.580953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 21:17:40.580970 kernel: audit: type=1130 audit(1768598260.571:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.563932 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 16 21:17:40.569407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 21:17:40.573999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 21:17:40.582269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:17:40.584464 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:17:40.588854 kernel: audit: type=1130 audit(1768598260.584:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.598184 systemd-modules-load[1105]: Inserted module 'br_netfilter'
Jan 16 21:17:40.599956 kernel: Bridge firewalling registered
Jan 16 21:17:40.599258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:17:40.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.606041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:17:40.609714 kernel: audit: type=1130 audit(1768598260.601:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.612282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:17:40.620199 kernel: audit: type=1130 audit(1768598260.613:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.623000 audit: BPF prog-id=6 op=LOAD
Jan 16 21:17:40.623826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:17:40.625623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:17:40.633791 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 21:17:40.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.639390 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 21:17:40.661780 dracut-cmdline[1143]: dracut-109
Jan 16 21:17:40.666523 dracut-cmdline[1143]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:17:40.670328 systemd-resolved[1137]: Positive Trust Anchors:
Jan 16 21:17:40.670335 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:17:40.670338 systemd-resolved[1137]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:17:40.670369 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:17:40.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.696865 systemd-resolved[1137]: Defaulting to hostname 'linux'.
Jan 16 21:17:40.697472 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:17:40.700493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:17:40.767852 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 21:17:40.787852 kernel: iscsi: registered transport (tcp)
Jan 16 21:17:40.812347 kernel: iscsi: registered transport (qla4xxx)
Jan 16 21:17:40.812383 kernel: QLogic iSCSI HBA Driver
Jan 16 21:17:40.834217 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 21:17:40.851757 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:17:40.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.858127 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 21:17:40.885405 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 21:17:40.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.888416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 21:17:40.891852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 21:17:40.920703 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 21:17:40.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.924000 audit: BPF prog-id=7 op=LOAD
Jan 16 21:17:40.924000 audit: BPF prog-id=8 op=LOAD
Jan 16 21:17:40.925635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:17:40.953016 systemd-udevd[1384]: Using default interface naming scheme 'v257'.
Jan 16 21:17:40.962413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:17:40.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.968618 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 21:17:40.980997 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 21:17:40.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:40.985000 audit: BPF prog-id=9 op=LOAD
Jan 16 21:17:40.986981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 21:17:40.992953 dracut-pre-trigger[1463]: rd.md=0: removing MD RAID activation
Jan 16 21:17:41.015599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:17:41.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.022895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 21:17:41.035008 systemd-networkd[1478]: lo: Link UP
Jan 16 21:17:41.035013 systemd-networkd[1478]: lo: Gained carrier
Jan 16 21:17:41.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.035406 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 21:17:41.039932 systemd[1]: Reached target network.target - Network.
Jan 16 21:17:41.065722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:17:41.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.070940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 21:17:41.122863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 16 21:17:41.148084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:17:41.151662 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 21:17:41.150656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:17:41.159213 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:17:41.164714 kernel: hv_vmbus: registering driver hv_netvsc
Jan 16 21:17:41.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.167711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:17:41.182844 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d725fe7 (unnamed net_device) (uninitialized): VF slot 1 added
Jan 16 21:17:41.190292 kernel: AES CTR mode by8 optimization enabled
Jan 16 21:17:41.190335 kernel: nvme nvme0: using unchecked data buffer
Jan 16 21:17:41.192005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:17:41.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.192075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:17:41.209215 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:17:41.209221 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:17:41.213066 systemd-networkd[1478]: eth0: Link UP
Jan 16 21:17:41.213186 systemd-networkd[1478]: eth0: Gained carrier
Jan 16 21:17:41.213197 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:17:41.218010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:17:41.231878 systemd-networkd[1478]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 16 21:17:41.268274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:17:41.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.281066 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 16 21:17:41.288958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 16 21:17:41.298064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 16 21:17:41.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:41.298729 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:17:41.313146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 16 21:17:41.314998 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:17:41.315882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:17:41.315900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 21:17:41.321492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 21:17:41.333242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 21:17:41.367631 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:17:41.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.204039 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 16 21:17:42.204214 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 16 21:17:42.206526 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 16 21:17:42.208042 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 16 21:17:42.211926 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 16 21:17:42.215944 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 16 21:17:42.219957 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 16 21:17:42.221895 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 16 21:17:42.233119 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 16 21:17:42.233294 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 16 21:17:42.237105 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 16 21:17:42.243720 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 16 21:17:42.252857 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 16 21:17:42.256038 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d725fe7 eth0: VF registering: eth1
Jan 16 21:17:42.256197 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 16 21:17:42.259758 systemd-networkd[1478]: eth1: Interface name change detected, renamed to enP30832s1.
Jan 16 21:17:42.262327 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 16 21:17:42.358859 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 16 21:17:42.362937 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 16 21:17:42.363134 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d725fe7 eth0: Data path switched to VF: enP30832s1
Jan 16 21:17:42.363517 systemd-networkd[1478]: enP30832s1: Link UP
Jan 16 21:17:42.364489 systemd-networkd[1478]: enP30832s1: Gained carrier
Jan 16 21:17:42.421699 disk-uuid[1663]: Warning: The kernel is still using the old partition table.
Jan 16 21:17:42.421699 disk-uuid[1663]: The new table will be used at the next reboot or after you
Jan 16 21:17:42.421699 disk-uuid[1663]: run partprobe(8) or kpartx(8)
Jan 16 21:17:42.421699 disk-uuid[1663]: The operation has completed successfully.
Jan 16 21:17:42.430409 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 21:17:42.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.430475 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 21:17:42.434028 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 21:17:42.456847 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1697)
Jan 16 21:17:42.459197 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:17:42.459227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:17:42.475465 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 21:17:42.475510 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 16 21:17:42.476971 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 16 21:17:42.483101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:17:42.483049 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 21:17:42.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.485359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 21:17:42.725631 ignition[1716]: Ignition 2.24.0
Jan 16 21:17:42.725641 ignition[1716]: Stage: fetch-offline
Jan 16 21:17:42.727239 ignition[1716]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:42.727254 ignition[1716]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:42.728488 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:17:42.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.727335 ignition[1716]: parsed url from cmdline: ""
Jan 16 21:17:42.731618 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 21:17:42.727338 ignition[1716]: no config URL provided
Jan 16 21:17:42.727342 ignition[1716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 21:17:42.727349 ignition[1716]: no config at "/usr/lib/ignition/user.ign"
Jan 16 21:17:42.727357 ignition[1716]: failed to fetch config: resource requires networking
Jan 16 21:17:42.727636 ignition[1716]: Ignition finished successfully
Jan 16 21:17:42.752436 ignition[1723]: Ignition 2.24.0
Jan 16 21:17:42.752446 ignition[1723]: Stage: fetch
Jan 16 21:17:42.752611 ignition[1723]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:42.752618 ignition[1723]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:42.752671 ignition[1723]: parsed url from cmdline: ""
Jan 16 21:17:42.752673 ignition[1723]: no config URL provided
Jan 16 21:17:42.752677 ignition[1723]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 21:17:42.752681 ignition[1723]: no config at "/usr/lib/ignition/user.ign"
Jan 16 21:17:42.752699 ignition[1723]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 16 21:17:42.922566 ignition[1723]: GET result: OK
Jan 16 21:17:42.922620 ignition[1723]: config has been read from IMDS userdata
Jan 16 21:17:42.922646 ignition[1723]: parsing config with SHA512: 4901386ab34e4d622cb1e1153db94c0f1d26b6c2d8062683a5ff1f5a5dc22056ea9ca789768dd7f05effadb93659dd52d9907199ad2d0009d3ffbae7ffedd426
Jan 16 21:17:42.928090 unknown[1723]: fetched base config from "system"
Jan 16 21:17:42.928796 unknown[1723]: fetched base config from "system"
Jan 16 21:17:42.937808 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jan 16 21:17:42.937856 kernel: audit: type=1130 audit(1768598262.935:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.929086 ignition[1723]: fetch: fetch complete
Jan 16 21:17:42.928801 unknown[1723]: fetched user config from "azure"
Jan 16 21:17:42.929089 ignition[1723]: fetch: fetch passed
Jan 16 21:17:42.930722 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 21:17:42.929119 ignition[1723]: Ignition finished successfully
Jan 16 21:17:42.940392 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 21:17:42.959213 ignition[1730]: Ignition 2.24.0
Jan 16 21:17:42.959222 ignition[1730]: Stage: kargs
Jan 16 21:17:42.959440 ignition[1730]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:42.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.961819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 21:17:42.971515 kernel: audit: type=1130 audit(1768598262.963:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:42.959446 ignition[1730]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:42.968937 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 21:17:42.960239 ignition[1730]: kargs: kargs passed
Jan 16 21:17:42.960267 ignition[1730]: Ignition finished successfully
Jan 16 21:17:42.991591 ignition[1736]: Ignition 2.24.0
Jan 16 21:17:42.991600 ignition[1736]: Stage: disks
Jan 16 21:17:42.991791 ignition[1736]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:42.991797 ignition[1736]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:42.994968 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 21:17:42.992580 ignition[1736]: disks: disks passed
Jan 16 21:17:42.992608 ignition[1736]: Ignition finished successfully
Jan 16 21:17:42.997472 systemd-networkd[1478]: eth0: Gained IPv6LL
Jan 16 21:17:43.015209 kernel: audit: type=1130 audit(1768598263.001:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.002527 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 21:17:43.006385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 21:17:43.006598 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 21:17:43.006620 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 21:17:43.006638 systemd[1]: Reached target basic.target - Basic System.
Jan 16 21:17:43.007474 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 21:17:43.047096 systemd-fsck[1744]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Jan 16 21:17:43.050370 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 21:17:43.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.056939 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 21:17:43.059015 kernel: audit: type=1130 audit(1768598263.052:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.184845 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ec5ae8d3-548b-4a34-bd68-b1a953fcffb6 r/w with ordered data mode. Quota mode: none.
Jan 16 21:17:43.185310 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 21:17:43.187039 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 21:17:43.200196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:17:43.203992 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 21:17:43.214763 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 21:17:43.219733 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 21:17:43.223899 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1753)
Jan 16 21:17:43.219769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:17:43.223426 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 21:17:43.225942 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 21:17:43.238948 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:17:43.238982 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:17:43.241958 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 21:17:43.242008 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 16 21:17:43.242944 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 16 21:17:43.244176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:17:43.368094 coreos-metadata[1755]: Jan 16 21:17:43.368 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 16 21:17:43.370525 coreos-metadata[1755]: Jan 16 21:17:43.370 INFO Fetch successful
Jan 16 21:17:43.371805 coreos-metadata[1755]: Jan 16 21:17:43.370 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 16 21:17:43.376405 coreos-metadata[1755]: Jan 16 21:17:43.376 INFO Fetch successful
Jan 16 21:17:43.380752 coreos-metadata[1755]: Jan 16 21:17:43.380 INFO wrote hostname ci-4580.0.0-p-452f1e7704 to /sysroot/etc/hostname
Jan 16 21:17:43.381589 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 21:17:43.392881 kernel: audit: type=1130 audit(1768598263.385:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.678509 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 21:17:43.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.684920 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 21:17:43.688677 kernel: audit: type=1130 audit(1768598263.680:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.688767 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 21:17:43.704330 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 21:17:43.707855 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:17:43.717937 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 21:17:43.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.724896 kernel: audit: type=1130 audit(1768598263.719:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.729173 ignition[1857]: INFO : Ignition 2.24.0
Jan 16 21:17:43.729173 ignition[1857]: INFO : Stage: mount
Jan 16 21:17:43.732636 ignition[1857]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:43.732636 ignition[1857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:43.732636 ignition[1857]: INFO : mount: mount passed
Jan 16 21:17:43.732636 ignition[1857]: INFO : Ignition finished successfully
Jan 16 21:17:43.743712 kernel: audit: type=1130 audit(1768598263.733:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:43.730911 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 21:17:43.735653 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 21:17:44.186385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:17:44.214851 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1867)
Jan 16 21:17:44.218072 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:17:44.218099 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:17:44.223051 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 21:17:44.223083 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 16 21:17:44.223205 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 16 21:17:44.225527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:17:44.248712 ignition[1884]: INFO : Ignition 2.24.0
Jan 16 21:17:44.248712 ignition[1884]: INFO : Stage: files
Jan 16 21:17:44.250821 ignition[1884]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:44.250821 ignition[1884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:44.250821 ignition[1884]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 21:17:44.256309 ignition[1884]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 21:17:44.256309 ignition[1884]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 21:17:44.271045 ignition[1884]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 21:17:44.274411 ignition[1884]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 21:17:44.274411 ignition[1884]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 21:17:44.273059 unknown[1884]: wrote ssh authorized keys file for user: core
Jan 16 21:17:44.280020 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 16 21:17:44.280020 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 16 21:17:44.331674 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 21:17:44.375108 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:17:44.378896 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:17:44.403866 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 16 21:17:44.765744 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 16 21:17:46.498842 ignition[1884]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:17:46.502902 ignition[1884]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 16 21:17:46.512097 ignition[1884]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:17:46.519316 ignition[1884]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:17:46.519316 ignition[1884]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 16 21:17:46.519316 ignition[1884]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 21:17:46.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.531998 kernel: audit: type=1130 audit(1768598266.526:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.532019 ignition[1884]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 21:17:46.532019 ignition[1884]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:17:46.532019 ignition[1884]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:17:46.532019 ignition[1884]: INFO : files: files passed
Jan 16 21:17:46.532019 ignition[1884]: INFO : Ignition finished successfully
Jan 16 21:17:46.523112 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 21:17:46.531465 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 21:17:46.538944 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 21:17:46.547871 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 21:17:46.547994 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 21:17:46.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.561849 kernel: audit: type=1130 audit(1768598266.557:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.568855 initrd-setup-root-after-ignition[1915]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:17:46.568855 initrd-setup-root-after-ignition[1915]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:17:46.573765 initrd-setup-root-after-ignition[1919]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:17:46.576303 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:17:46.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.580050 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 21:17:46.584029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 21:17:46.616168 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 21:17:46.616252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 21:17:46.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.620043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 21:17:46.622878 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 21:17:46.624340 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 21:17:46.625942 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 21:17:46.649187 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:17:46.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.653068 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 21:17:46.671245 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:17:46.671431 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:17:46.676011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:17:46.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.679979 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 21:17:46.681520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 21:17:46.681621 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:17:46.682148 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 21:17:46.682373 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 21:17:46.682641 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 21:17:46.682904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:17:46.683165 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 21:17:46.683433 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:17:46.683669 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 21:17:46.695283 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:17:46.713638 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 21:17:46.715980 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 21:17:46.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.716932 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 21:17:46.717155 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 21:17:46.717252 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:17:46.717735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:17:46.718077 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:17:46.718512 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 21:17:46.719926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:17:46.720011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 21:17:46.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.720105 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:17:46.720498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 21:17:46.720594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:17:46.721028 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 21:17:46.721111 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 21:17:46.721533 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 21:17:46.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.768947 ignition[1939]: INFO : Ignition 2.24.0
Jan 16 21:17:46.768947 ignition[1939]: INFO : Stage: umount
Jan 16 21:17:46.768947 ignition[1939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:17:46.768947 ignition[1939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 16 21:17:46.768947 ignition[1939]: INFO : umount: umount passed
Jan 16 21:17:46.768947 ignition[1939]: INFO : Ignition finished successfully
Jan 16 21:17:46.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.721626 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 21:17:46.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.722392 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 21:17:46.734994 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 21:17:46.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.735174 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:17:46.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.759036 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 21:17:46.762487 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 21:17:46.764168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:17:46.767917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 21:17:46.768033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:17:46.774016 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 21:17:46.774124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:17:46.776337 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 21:17:46.777314 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 21:17:46.785801 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 21:17:46.785882 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 21:17:46.788218 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 21:17:46.788286 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 21:17:46.794944 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 21:17:46.794989 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 21:17:46.796920 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 21:17:46.796976 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 21:17:46.799107 systemd[1]: Stopped target network.target - Network.
Jan 16 21:17:46.800503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 21:17:46.800553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:17:46.804018 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 21:17:46.807999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 21:17:46.809864 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:17:46.828072 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 21:17:46.830806 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 21:17:46.837083 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 21:17:46.837120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:17:46.846596 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 21:17:46.846635 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:17:46.849741 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 16 21:17:46.849768 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:17:46.854888 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 21:17:46.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.854938 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 21:17:46.855412 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 21:17:46.855447 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 21:17:46.855704 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 21:17:46.855930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 21:17:46.857019 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 21:17:46.860940 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 21:17:46.861019 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 21:17:46.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.874629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 21:17:46.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.874710 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 21:17:46.879000 audit: BPF prog-id=6 op=UNLOAD
Jan 16 21:17:46.879000 audit: BPF prog-id=9 op=UNLOAD
Jan 16 21:17:46.880780 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 16 21:17:46.884915 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 21:17:46.884957 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:17:46.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.888929 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 21:17:46.889229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 21:17:46.889279 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 21:17:46.889517 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 21:17:46.889546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:17:46.889746 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 21:17:46.889770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:17:46.889811 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:17:46.913343 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 21:17:46.919985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:17:46.935903 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d725fe7 eth0: Data path switched from VF: enP30832s1
Jan 16 21:17:46.936069 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 16 21:17:46.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.924445 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 21:17:46.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.924498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:17:46.924752 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 21:17:46.924779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:17:46.925279 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 21:17:46.925318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 21:17:46.925605 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 21:17:46.925640 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 21:17:46.925850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 21:17:46.925883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 21:17:46.927977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 21:17:46.938869 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 16 21:17:46.938919 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:17:46.939960 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 21:17:46.940002 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:17:46.940120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:17:46.940150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:17:46.940636 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 21:17:46.940703 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 21:17:46.984748 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 21:17:46.984829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 21:17:46.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:46.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:47.180770 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 21:17:47.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:47.180937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 21:17:47.183192 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 21:17:47.187220 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 21:17:47.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:47.187269 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 21:17:47.191885 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 21:17:47.229213 systemd[1]: Switching root.
Jan 16 21:17:47.267127 systemd-journald[1102]: Journal stopped
Jan 16 21:17:49.054899 systemd-journald[1102]: Received SIGTERM from PID 1 (systemd).
Jan 16 21:17:49.054922 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 21:17:49.054935 kernel: SELinux: policy capability open_perms=1
Jan 16 21:17:49.054945 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 21:17:49.054953 kernel: SELinux: policy capability always_check_network=0
Jan 16 21:17:49.054960 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 21:17:49.054968 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 21:17:49.054977 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 21:17:49.054989 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 21:17:49.054997 kernel: SELinux: policy capability userspace_initial_context=0
Jan 16 21:17:49.055010 systemd[1]: Successfully loaded SELinux policy in 74.565ms.
Jan 16 21:17:49.055022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.399ms.
Jan 16 21:17:49.055058 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 16 21:17:49.055071 systemd[1]: Detected virtualization microsoft.
Jan 16 21:17:49.055085 systemd[1]: Detected architecture x86-64.
Jan 16 21:17:49.055097 systemd[1]: Detected first boot.
Jan 16 21:17:49.055109 systemd[1]: Hostname set to .
Jan 16 21:17:49.055126 systemd[1]: Initializing machine ID from random generator.
Jan 16 21:17:49.055137 kernel: kauditd_printk_skb: 45 callbacks suppressed
Jan 16 21:17:49.055149 kernel: audit: type=1334 audit(1768598267.979:91): prog-id=10 op=LOAD
Jan 16 21:17:49.055158 kernel: audit: type=1334 audit(1768598267.979:92): prog-id=10 op=UNLOAD
Jan 16 21:17:49.055170 kernel: audit: type=1334 audit(1768598267.979:93): prog-id=11 op=LOAD
Jan 16 21:17:49.055181 kernel: audit: type=1334 audit(1768598267.979:94): prog-id=11 op=UNLOAD
Jan 16 21:17:49.055196 zram_generator::config[1983]: No configuration found.
Jan 16 21:17:49.055209 kernel: Guest personality initialized and is inactive
Jan 16 21:17:49.055220 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Jan 16 21:17:49.055230 kernel: Initialized host personality
Jan 16 21:17:49.055240 kernel: NET: Registered PF_VSOCK protocol family
Jan 16 21:17:49.055252 systemd[1]: Populated /etc with preset unit settings.
Jan 16 21:17:49.055264 kernel: audit: type=1334 audit(1768598268.643:95): prog-id=12 op=LOAD
Jan 16 21:17:49.055277 kernel: audit: type=1334 audit(1768598268.643:96): prog-id=3 op=UNLOAD
Jan 16 21:17:49.055289 kernel: audit: type=1334 audit(1768598268.644:97): prog-id=13 op=LOAD
Jan 16 21:17:49.055302 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 21:17:49.055313 kernel: audit: type=1334 audit(1768598268.644:98): prog-id=14 op=LOAD
Jan 16 21:17:49.055327 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 21:17:49.055339 kernel: audit: type=1334 audit(1768598268.644:99): prog-id=4 op=UNLOAD
Jan 16 21:17:49.055351 kernel: audit: type=1334 audit(1768598268.644:100): prog-id=5 op=UNLOAD
Jan 16 21:17:49.055362 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 21:17:49.055378 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 21:17:49.055390 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 21:17:49.055409 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 21:17:49.055419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 21:17:49.055432 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 21:17:49.055449 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 21:17:49.055464 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 21:17:49.055477 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 21:17:49.055488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:17:49.055500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:17:49.055512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 21:17:49.055525 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 21:17:49.055536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 21:17:49.055545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 21:17:49.055554 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 21:17:49.055563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:17:49.055572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:17:49.055583 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 21:17:49.055593 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 21:17:49.055601 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 21:17:49.055611 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 21:17:49.055620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:17:49.055629 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 21:17:49.055639 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 16 21:17:49.055649 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 21:17:49.055658 systemd[1]: Reached target swap.target - Swaps.
Jan 16 21:17:49.055667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 21:17:49.055677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 21:17:49.055688 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 16 21:17:49.055697 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:17:49.055707 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 16 21:17:49.055716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:17:49.055725 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 16 21:17:49.055735 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 16 21:17:49.055746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:17:49.055756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:17:49.055765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 21:17:49.055775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 21:17:49.055786 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 21:17:49.055796 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 21:17:49.055806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:17:49.055817 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 21:17:49.055827 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 21:17:49.055853 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 21:17:49.055863 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 21:17:49.055872 systemd[1]: Reached target machines.target - Containers.
Jan 16 21:17:49.055882 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 21:17:49.055893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 21:17:49.055902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 21:17:49.055912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 21:17:49.055921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 21:17:49.055931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 21:17:49.055940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 21:17:49.055950 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 21:17:49.055961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 21:17:49.055970 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 21:17:49.055979 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 21:17:49.055989 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 21:17:49.055998 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 21:17:49.056008 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 21:17:49.056018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 16 21:17:49.056028 kernel: fuse: init (API version 7.41)
Jan 16 21:17:49.056037 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 21:17:49.056046 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 21:17:49.056056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 21:17:49.056066 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 21:17:49.056076 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 16 21:17:49.056087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 21:17:49.056097 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:17:49.056106 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 21:17:49.056116 kernel: ACPI: bus type drm_connector registered
Jan 16 21:17:49.056125 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 21:17:49.056134 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 21:17:49.056160 systemd-journald[2076]: Collecting audit messages is enabled.
Jan 16 21:17:49.056183 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 21:17:49.056192 systemd-journald[2076]: Journal started
Jan 16 21:17:49.056214 systemd-journald[2076]: Runtime Journal (/run/log/journal/f7951080f5ab48b19010e01391be9dfd) is 8M, max 158.5M, 150.5M free.
Jan 16 21:17:48.787000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 16 21:17:48.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:48.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:48.966000 audit: BPF prog-id=14 op=UNLOAD
Jan 16 21:17:48.966000 audit: BPF prog-id=13 op=UNLOAD
Jan 16 21:17:48.967000 audit: BPF prog-id=15 op=LOAD
Jan 16 21:17:48.968000 audit: BPF prog-id=16 op=LOAD
Jan 16 21:17:48.968000 audit: BPF prog-id=17 op=LOAD
Jan 16 21:17:49.050000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 16 21:17:49.050000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc3a76ddb0 a2=4000 a3=0 items=0 ppid=1 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:17:49.050000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 16 21:17:48.634988 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 21:17:48.645279 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 16 21:17:48.646169 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 21:17:49.062861 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 21:17:49.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.068433 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 21:17:49.071963 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 21:17:49.075191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:17:49.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.077124 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 21:17:49.077320 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 21:17:49.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.082092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 21:17:49.082242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 21:17:49.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.085130 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 21:17:49.085309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 21:17:49.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.087902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 21:17:49.088079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 21:17:49.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.090791 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 21:17:49.090968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 21:17:49.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.093815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 21:17:49.094007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 21:17:49.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.096765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:17:49.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.099438 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 21:17:49.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.102246 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:17:49.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.105722 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 21:17:49.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.108813 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 16 21:17:49.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.118092 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 21:17:49.121022 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 16 21:17:49.124958 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 21:17:49.128920 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 21:17:49.131954 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 21:17:49.131979 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 21:17:49.135903 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 16 21:17:49.137316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 21:17:49.137396 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 16 21:17:49.141902 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 21:17:49.146008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 21:17:49.147376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 21:17:49.150408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 21:17:49.152699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 21:17:49.153421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:17:49.157995 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 21:17:49.160947 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 21:17:49.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.167719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:17:49.170818 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 21:17:49.172283 systemd-journald[2076]: Time spent on flushing to /var/log/journal/f7951080f5ab48b19010e01391be9dfd is 26.047ms for 1126 entries.
Jan 16 21:17:49.172283 systemd-journald[2076]: System Journal (/var/log/journal/f7951080f5ab48b19010e01391be9dfd) is 8M, max 2.2G, 2.2G free.
Jan 16 21:17:49.216228 systemd-journald[2076]: Received client request to flush runtime journal.
Jan 16 21:17:49.216326 kernel: loop1: detected capacity change from 0 to 111560
Jan 16 21:17:49.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.174405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 21:17:49.176592 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 21:17:49.180571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 21:17:49.187960 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 16 21:17:49.209877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:17:49.217629 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 21:17:49.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.245664 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 16 21:17:49.269914 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 21:17:49.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.272000 audit: BPF prog-id=18 op=LOAD
Jan 16 21:17:49.272000 audit: BPF prog-id=19 op=LOAD
Jan 16 21:17:49.272000 audit: BPF prog-id=20 op=LOAD
Jan 16 21:17:49.273941 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 16 21:17:49.277000 audit: BPF prog-id=21 op=LOAD
Jan 16 21:17:49.280980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:17:49.283902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:17:49.291000 audit: BPF prog-id=22 op=LOAD
Jan 16 21:17:49.292000 audit: BPF prog-id=23 op=LOAD
Jan 16 21:17:49.292000 audit: BPF prog-id=24 op=LOAD
Jan 16 21:17:49.293378 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 21:17:49.297000 audit: BPF prog-id=25 op=LOAD
Jan 16 21:17:49.297000 audit: BPF prog-id=26 op=LOAD
Jan 16 21:17:49.297000 audit: BPF prog-id=27 op=LOAD
Jan 16 21:17:49.298810 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 16 21:17:49.344868 kernel: loop2: detected capacity change from 0 to 25512
Jan 16 21:17:49.350454 systemd-tmpfiles[2143]: ACLs are not supported, ignoring.
Jan 16 21:17:49.350873 systemd-tmpfiles[2143]: ACLs are not supported, ignoring.
Jan 16 21:17:49.353677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:17:49.355333 systemd-nsresourced[2146]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 16 21:17:49.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.362316 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 21:17:49.365188 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 16 21:17:49.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.459854 kernel: loop3: detected capacity change from 0 to 50784
Jan 16 21:17:49.469197 systemd-oomd[2141]: No swap; memory pressure usage will be degraded
Jan 16 21:17:49.469813 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 16 21:17:49.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.477481 systemd-resolved[2142]: Positive Trust Anchors:
Jan 16 21:17:49.477491 systemd-resolved[2142]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:17:49.477494 systemd-resolved[2142]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:17:49.477527 systemd-resolved[2142]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:17:49.527129 systemd-resolved[2142]: Using system hostname 'ci-4580.0.0-p-452f1e7704'.
Jan 16 21:17:49.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:17:49.528064 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:17:49.530301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:17:49.562851 kernel: loop4: detected capacity change from 0 to 224512
Jan 16 21:17:49.612230 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 21:17:49.612849 kernel: loop5: detected capacity change from 0 to 111560 Jan 16 21:17:49.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:49.614000 audit: BPF prog-id=8 op=UNLOAD Jan 16 21:17:49.614000 audit: BPF prog-id=7 op=UNLOAD Jan 16 21:17:49.615000 audit: BPF prog-id=28 op=LOAD Jan 16 21:17:49.615000 audit: BPF prog-id=29 op=LOAD Jan 16 21:17:49.616430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 21:17:49.624850 kernel: loop6: detected capacity change from 0 to 25512 Jan 16 21:17:49.631875 kernel: loop7: detected capacity change from 0 to 50784 Jan 16 21:17:49.641884 kernel: loop1: detected capacity change from 0 to 224512 Jan 16 21:17:49.642985 systemd-udevd[2170]: Using default interface naming scheme 'v257'. Jan 16 21:17:49.648928 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 21:17:49.650913 (sd-merge)[2168]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 16 21:17:49.652777 (sd-merge)[2168]: Merged extensions into '/usr'. Jan 16 21:17:49.655745 systemd[1]: Reload requested from client PID 2125 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 21:17:49.655756 systemd[1]: Reloading... Jan 16 21:17:49.693854 zram_generator::config[2196]: No configuration found. 
Jan 16 21:17:49.822865 kernel: hv_vmbus: registering driver hv_balloon Jan 16 21:17:49.827889 kernel: hv_vmbus: registering driver hyperv_fb Jan 16 21:17:49.837946 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 16 21:17:49.839857 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 16 21:17:49.843860 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 16 21:17:49.845971 kernel: Console: switching to colour dummy device 80x25 Jan 16 21:17:49.849868 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 21:17:49.852852 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 21:17:49.880855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#229 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 16 21:17:50.035502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 21:17:50.035826 systemd[1]: Reloading finished in 379 ms. Jan 16 21:17:50.049356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 21:17:50.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.053230 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 21:17:50.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.090976 systemd[1]: Starting ensure-sysext.service... Jan 16 21:17:50.092000 audit: BPF prog-id=30 op=LOAD Jan 16 21:17:50.098902 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 16 21:17:50.103394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 21:17:50.110943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:17:50.114000 audit: BPF prog-id=31 op=LOAD Jan 16 21:17:50.114000 audit: BPF prog-id=25 op=UNLOAD Jan 16 21:17:50.114000 audit: BPF prog-id=32 op=LOAD Jan 16 21:17:50.115000 audit: BPF prog-id=33 op=LOAD Jan 16 21:17:50.115000 audit: BPF prog-id=26 op=UNLOAD Jan 16 21:17:50.115000 audit: BPF prog-id=27 op=UNLOAD Jan 16 21:17:50.117000 audit: BPF prog-id=34 op=LOAD Jan 16 21:17:50.117000 audit: BPF prog-id=15 op=UNLOAD Jan 16 21:17:50.117000 audit: BPF prog-id=35 op=LOAD Jan 16 21:17:50.117000 audit: BPF prog-id=36 op=LOAD Jan 16 21:17:50.117000 audit: BPF prog-id=16 op=UNLOAD Jan 16 21:17:50.117000 audit: BPF prog-id=17 op=UNLOAD Jan 16 21:17:50.117000 audit: BPF prog-id=37 op=LOAD Jan 16 21:17:50.117000 audit: BPF prog-id=38 op=LOAD Jan 16 21:17:50.117000 audit: BPF prog-id=28 op=UNLOAD Jan 16 21:17:50.117000 audit: BPF prog-id=29 op=UNLOAD Jan 16 21:17:50.118000 audit: BPF prog-id=39 op=LOAD Jan 16 21:17:50.118000 audit: BPF prog-id=21 op=UNLOAD Jan 16 21:17:50.118000 audit: BPF prog-id=40 op=LOAD Jan 16 21:17:50.118000 audit: BPF prog-id=22 op=UNLOAD Jan 16 21:17:50.118000 audit: BPF prog-id=41 op=LOAD Jan 16 21:17:50.118000 audit: BPF prog-id=42 op=LOAD Jan 16 21:17:50.118000 audit: BPF prog-id=23 op=UNLOAD Jan 16 21:17:50.118000 audit: BPF prog-id=24 op=UNLOAD Jan 16 21:17:50.119000 audit: BPF prog-id=43 op=LOAD Jan 16 21:17:50.120000 audit: BPF prog-id=18 op=UNLOAD Jan 16 21:17:50.120000 audit: BPF prog-id=44 op=LOAD Jan 16 21:17:50.120000 audit: BPF prog-id=45 op=LOAD Jan 16 21:17:50.120000 audit: BPF prog-id=19 op=UNLOAD Jan 16 21:17:50.120000 audit: BPF prog-id=20 op=UNLOAD Jan 16 21:17:50.154608 systemd-tmpfiles[2321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 16 21:17:50.154631 systemd-tmpfiles[2321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 16 21:17:50.154826 systemd-tmpfiles[2321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 21:17:50.155748 systemd-tmpfiles[2321]: ACLs are not supported, ignoring. Jan 16 21:17:50.155791 systemd-tmpfiles[2321]: ACLs are not supported, ignoring. Jan 16 21:17:50.171123 systemd-tmpfiles[2321]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 21:17:50.171531 systemd-tmpfiles[2321]: Skipping /boot Jan 16 21:17:50.172002 systemd[1]: Reload requested from client PID 2319 ('systemctl') (unit ensure-sysext.service)... Jan 16 21:17:50.172015 systemd[1]: Reloading... Jan 16 21:17:50.179764 systemd-tmpfiles[2321]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 21:17:50.179851 systemd-tmpfiles[2321]: Skipping /boot Jan 16 21:17:50.200032 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 16 21:17:50.260928 zram_generator::config[2365]: No configuration found. Jan 16 21:17:50.272800 systemd-networkd[2320]: lo: Link UP Jan 16 21:17:50.272811 systemd-networkd[2320]: lo: Gained carrier Jan 16 21:17:50.276306 systemd-networkd[2320]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 21:17:50.276317 systemd-networkd[2320]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 16 21:17:50.279557 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 16 21:17:50.280905 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 16 21:17:50.282161 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d725fe7 eth0: Data path switched to VF: enP30832s1 Jan 16 21:17:50.282931 systemd-networkd[2320]: enP30832s1: Link UP Jan 16 21:17:50.283530 systemd-networkd[2320]: eth0: Link UP Jan 16 21:17:50.283591 systemd-networkd[2320]: eth0: Gained carrier Jan 16 21:17:50.283636 systemd-networkd[2320]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 21:17:50.287388 systemd-networkd[2320]: enP30832s1: Gained carrier Jan 16 21:17:50.291882 systemd-networkd[2320]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 16 21:17:50.425330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 16 21:17:50.428212 systemd[1]: Reloading finished in 255 ms. Jan 16 21:17:50.445387 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 21:17:50.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:50.447000 audit: BPF prog-id=46 op=LOAD Jan 16 21:17:50.447000 audit: BPF prog-id=43 op=UNLOAD Jan 16 21:17:50.447000 audit: BPF prog-id=47 op=LOAD Jan 16 21:17:50.447000 audit: BPF prog-id=48 op=LOAD Jan 16 21:17:50.447000 audit: BPF prog-id=44 op=UNLOAD Jan 16 21:17:50.447000 audit: BPF prog-id=45 op=UNLOAD Jan 16 21:17:50.448000 audit: BPF prog-id=49 op=LOAD Jan 16 21:17:50.448000 audit: BPF prog-id=34 op=UNLOAD Jan 16 21:17:50.448000 audit: BPF prog-id=50 op=LOAD Jan 16 21:17:50.448000 audit: BPF prog-id=51 op=LOAD Jan 16 21:17:50.448000 audit: BPF prog-id=35 op=UNLOAD Jan 16 21:17:50.448000 audit: BPF prog-id=36 op=UNLOAD Jan 16 21:17:50.448000 audit: BPF prog-id=52 op=LOAD Jan 16 21:17:50.448000 audit: BPF prog-id=39 op=UNLOAD Jan 16 21:17:50.449000 audit: BPF prog-id=53 op=LOAD Jan 16 21:17:50.449000 audit: BPF prog-id=40 op=UNLOAD Jan 16 21:17:50.449000 audit: BPF prog-id=54 op=LOAD Jan 16 21:17:50.449000 audit: BPF prog-id=55 op=LOAD Jan 16 21:17:50.449000 audit: BPF prog-id=41 op=UNLOAD Jan 16 21:17:50.449000 audit: BPF prog-id=42 op=UNLOAD Jan 16 21:17:50.450000 audit: BPF prog-id=56 op=LOAD Jan 16 21:17:50.450000 audit: BPF prog-id=30 op=UNLOAD Jan 16 21:17:50.450000 audit: BPF prog-id=57 op=LOAD Jan 16 21:17:50.459000 audit: BPF prog-id=31 op=UNLOAD Jan 16 21:17:50.459000 audit: BPF prog-id=58 op=LOAD Jan 16 21:17:50.459000 audit: BPF prog-id=59 op=LOAD Jan 16 21:17:50.459000 audit: BPF prog-id=32 op=UNLOAD Jan 16 21:17:50.459000 audit: BPF prog-id=33 op=UNLOAD Jan 16 21:17:50.459000 audit: BPF prog-id=60 op=LOAD Jan 16 21:17:50.459000 audit: BPF prog-id=61 op=LOAD Jan 16 21:17:50.459000 audit: BPF prog-id=37 op=UNLOAD Jan 16 21:17:50.459000 audit: BPF prog-id=38 op=UNLOAD Jan 16 21:17:50.462647 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 16 21:17:50.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.466421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:17:50.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.492778 systemd[1]: Reached target network.target - Network. Jan 16 21:17:50.495580 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:17:50.498654 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 21:17:50.501054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:17:50.508948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 21:17:50.513036 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 21:17:50.516998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 21:17:50.523573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 21:17:50.528067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 21:17:50.528152 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 21:17:50.529304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 21:17:50.534934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 16 21:17:50.537288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:17:50.538480 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 21:17:50.543982 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 16 21:17:50.547644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 21:17:50.550598 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 21:17:50.554373 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 21:17:50.556695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 21:17:50.556903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:17:50.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.559161 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:17:50.567595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:17:50.571902 systemd[1]: Finished ensure-sysext.service. Jan 16 21:17:50.572000 audit[2445]: SYSTEM_BOOT pid=2445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 16 21:17:50.573775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 21:17:50.574207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 21:17:50.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.576098 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 21:17:50.576519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 21:17:50.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.579077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 21:17:50.579330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 21:17:50.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:50.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.582188 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 21:17:50.582344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 21:17:50.585140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 21:17:50.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.595987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 21:17:50.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.602061 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 21:17:50.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.606556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 16 21:17:50.606597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 21:17:50.608524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:17:50.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.618455 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 16 21:17:50.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:50.649375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:17:50.649394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:17:50.661000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 16 21:17:50.661000 audit[2476]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe4d7a40 a2=420 a3=0 items=0 ppid=2428 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:50.662015 augenrules[2476]: No rules Jan 16 21:17:50.661000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:17:50.662358 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 16 21:17:50.662523 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 21:17:50.736434 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 21:17:50.738028 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 21:17:51.751934 ldconfig[2435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 21:17:51.759511 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 21:17:51.762687 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 21:17:51.776521 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 21:17:51.777962 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 21:17:51.780986 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 21:17:51.782369 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 21:17:51.784897 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 16 21:17:51.786219 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 21:17:51.788930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 21:17:51.790480 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 16 21:17:51.792068 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 16 21:17:51.793234 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 16 21:17:51.795888 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 21:17:51.795915 systemd[1]: Reached target paths.target - Path Units. Jan 16 21:17:51.796960 systemd[1]: Reached target timers.target - Timer Units. Jan 16 21:17:51.798793 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 21:17:51.802543 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 21:17:51.805031 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 16 21:17:51.806728 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 16 21:17:51.809918 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 16 21:17:51.814003 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 21:17:51.815482 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 16 21:17:51.819312 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 21:17:51.821239 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 21:17:51.823882 systemd[1]: Reached target basic.target - Basic System. Jan 16 21:17:51.824982 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 21:17:51.824998 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 21:17:51.828950 systemd-networkd[2320]: eth0: Gained IPv6LL Jan 16 21:17:51.830052 systemd[1]: Starting chronyd.service - NTP client/server... Jan 16 21:17:51.831849 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 21:17:51.836466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 16 21:17:51.843940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 21:17:51.849944 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 21:17:51.855975 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 21:17:51.859934 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 21:17:51.861819 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 21:17:51.863997 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 16 21:17:51.866096 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 16 21:17:51.866966 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 16 21:17:51.869391 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 16 21:17:51.872426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 21:17:51.876961 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 21:17:51.878115 jq[2496]: false Jan 16 21:17:51.883828 KVP[2499]: KVP starting; pid is:2499 Jan 16 21:17:51.884819 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 21:17:51.888347 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 21:17:51.893877 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 16 21:17:51.894296 KVP[2499]: KVP LIC Version: 3.1 Jan 16 21:17:51.894848 kernel: hv_utils: KVP IC version 4.0 Jan 16 21:17:51.896196 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 21:17:51.901611 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 21:17:51.901898 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Refreshing passwd entry cache Jan 16 21:17:51.899772 oslogin_cache_refresh[2498]: Refreshing passwd entry cache Jan 16 21:17:51.905354 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 21:17:51.910968 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 21:17:51.914517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 21:17:51.922049 chronyd[2488]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 16 21:17:51.922427 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 21:17:51.925382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 21:17:51.925568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 21:17:51.927245 chronyd[2488]: Timezone right/UTC failed leap second check, ignoring Jan 16 21:17:51.927581 chronyd[2488]: Loaded seccomp filter (level 2) Jan 16 21:17:51.928422 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Failure getting users, quitting Jan 16 21:17:51.928422 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 16 21:17:51.928422 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Refreshing group entry cache Jan 16 21:17:51.927619 oslogin_cache_refresh[2498]: Failure getting users, quitting Jan 16 21:17:51.927631 oslogin_cache_refresh[2498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 16 21:17:51.927660 oslogin_cache_refresh[2498]: Refreshing group entry cache Jan 16 21:17:51.929243 extend-filesystems[2497]: Found /dev/nvme0n1p6 Jan 16 21:17:51.931287 systemd[1]: Started chronyd.service - NTP client/server. Jan 16 21:17:51.934338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 21:17:51.934512 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 21:17:51.940506 extend-filesystems[2497]: Found /dev/nvme0n1p9 Jan 16 21:17:51.942975 update_engine[2506]: I20260116 21:17:51.942917 2506 main.cc:92] Flatcar Update Engine starting Jan 16 21:17:51.948094 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 21:17:51.949157 extend-filesystems[2497]: Checking size of /dev/nvme0n1p9 Jan 16 21:17:51.953926 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Failure getting groups, quitting Jan 16 21:17:51.953926 google_oslogin_nss_cache[2498]: oslogin_cache_refresh[2498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 16 21:17:51.950165 oslogin_cache_refresh[2498]: Failure getting groups, quitting Jan 16 21:17:51.950172 oslogin_cache_refresh[2498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 16 21:17:51.955417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:17:51.965659 jq[2507]: true Jan 16 21:17:51.965938 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 21:17:51.968184 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jan 16 21:17:51.968389 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 16 21:17:51.995651 tar[2516]: linux-amd64/LICENSE Jan 16 21:17:51.995809 tar[2516]: linux-amd64/helm Jan 16 21:17:52.002473 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 21:17:52.002693 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 21:17:52.009072 extend-filesystems[2497]: Resized partition /dev/nvme0n1p9 Jan 16 21:17:52.020318 extend-filesystems[2550]: resize2fs 1.47.3 (8-Jul-2025) Jan 16 21:17:52.027861 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 16 21:17:52.030343 jq[2541]: true Jan 16 21:17:52.086919 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 16 21:17:52.086962 update_engine[2506]: I20260116 21:17:52.064213 2506 update_check_scheduler.cc:74] Next update check in 6m30s Jan 16 21:17:52.039019 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 21:17:52.038864 dbus-daemon[2491]: [system] SELinux support is enabled Jan 16 21:17:52.049096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 21:17:52.049116 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 21:17:52.051575 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 21:17:52.051590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 21:17:52.061930 systemd[1]: Started update-engine.service - Update Engine. Jan 16 21:17:52.065676 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 16 21:17:52.102771 extend-filesystems[2550]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 16 21:17:52.102771 extend-filesystems[2550]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 16 21:17:52.102771 extend-filesystems[2550]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 16 21:17:52.115472 extend-filesystems[2497]: Resized filesystem in /dev/nvme0n1p9 Jan 16 21:17:52.104879 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 21:17:52.107898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 21:17:52.125494 systemd-logind[2504]: New seat seat0. Jan 16 21:17:52.126599 systemd-logind[2504]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 16 21:17:52.127015 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 21:17:52.132939 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 21:17:52.168240 bash[2580]: Updated "/home/core/.ssh/authorized_keys" Jan 16 21:17:52.166880 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 21:17:52.193271 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 16 21:17:52.197692 coreos-metadata[2490]: Jan 16 21:17:52.197 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 16 21:17:52.201201 coreos-metadata[2490]: Jan 16 21:17:52.201 INFO Fetch successful
Jan 16 21:17:52.201276 coreos-metadata[2490]: Jan 16 21:17:52.201 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 16 21:17:52.247278 coreos-metadata[2490]: Jan 16 21:17:52.247 INFO Fetch successful
Jan 16 21:17:52.247278 coreos-metadata[2490]: Jan 16 21:17:52.247 INFO Fetching http://168.63.129.16/machine/5e55b45d-e75c-4fd2-be3c-ca04cfc71650/f66b24e4%2Df298%2D4cdc%2D9ab4%2Dee45079ad5bf.%5Fci%2D4580.0.0%2Dp%2D452f1e7704?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 16 21:17:52.285662 coreos-metadata[2490]: Jan 16 21:17:52.285 INFO Fetch successful
Jan 16 21:17:52.285662 coreos-metadata[2490]: Jan 16 21:17:52.285 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 16 21:17:52.292201 coreos-metadata[2490]: Jan 16 21:17:52.292 INFO Fetch successful
Jan 16 21:17:52.325665 locksmithd[2558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 16 21:17:52.335550 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 16 21:17:52.337691 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 16 21:17:52.525862 sshd_keygen[2515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 21:17:52.546551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 21:17:52.551034 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 21:17:52.557246 containerd[2539]: time="2026-01-16T21:17:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 16 21:17:52.557246 containerd[2539]: time="2026-01-16T21:17:52.552327173Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 16 21:17:52.555255 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 16 21:17:52.578512 containerd[2539]: time="2026-01-16T21:17:52.577758838Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.912µs"
Jan 16 21:17:52.579376 containerd[2539]: time="2026-01-16T21:17:52.579351916Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 16 21:17:52.579460 containerd[2539]: time="2026-01-16T21:17:52.579448977Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 16 21:17:52.579496 containerd[2539]: time="2026-01-16T21:17:52.579488616Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 16 21:17:52.580369 containerd[2539]: time="2026-01-16T21:17:52.580351666Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 16 21:17:52.580444 containerd[2539]: time="2026-01-16T21:17:52.580434869Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 16 21:17:52.580518 containerd[2539]: time="2026-01-16T21:17:52.580507693Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 16 21:17:52.580552 containerd[2539]: time="2026-01-16T21:17:52.580545104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.580721 containerd[2539]: time="2026-01-16T21:17:52.580709181Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.581538 containerd[2539]: time="2026-01-16T21:17:52.581517092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 16 21:17:52.581614 containerd[2539]: time="2026-01-16T21:17:52.581601344Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584333270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584490619Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584505037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584566239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584683449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584702523Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584711270Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584737122Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584935149Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 16 21:17:52.585070 containerd[2539]: time="2026-01-16T21:17:52.584980319Z" level=info msg="metadata content store policy set" policy=shared
Jan 16 21:17:52.585579 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 21:17:52.585792 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 21:17:52.591008 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 21:17:52.599970 containerd[2539]: time="2026-01-16T21:17:52.599946530Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 16 21:17:52.600144 containerd[2539]: time="2026-01-16T21:17:52.600075051Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600196677Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600222874Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600238369Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600248054Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600257771Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600265537Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600274834Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600284362Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600296889Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600314219Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600321757Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600331035Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 16 21:17:52.600555 containerd[2539]: time="2026-01-16T21:17:52.600417683Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600431840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600443009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600451324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600459356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600468388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600487724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600500260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600510055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600518591Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600526643Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600548655Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600594109Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600604842Z" level=info msg="Start snapshots syncer"
Jan 16 21:17:52.600792 containerd[2539]: time="2026-01-16T21:17:52.600621879Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 16 21:17:52.602247 containerd[2539]: time="2026-01-16T21:17:52.602100434Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 16 21:17:52.602247 containerd[2539]: time="2026-01-16T21:17:52.602156249Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602204557Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602312100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602356671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602368092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602376743Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602392616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602401691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602410518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602418888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 16 21:17:52.602582 containerd[2539]: time="2026-01-16T21:17:52.602427474Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 16 21:17:52.603015 containerd[2539]: time="2026-01-16T21:17:52.602871928Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 16 21:17:52.603015 containerd[2539]: time="2026-01-16T21:17:52.602893999Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 16 21:17:52.603015 containerd[2539]: time="2026-01-16T21:17:52.602902771Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 16 21:17:52.603553 containerd[2539]: time="2026-01-16T21:17:52.602912342Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 16 21:17:52.603590 containerd[2539]: time="2026-01-16T21:17:52.603563399Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 16 21:17:52.603590 containerd[2539]: time="2026-01-16T21:17:52.603578497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 16 21:17:52.603590 containerd[2539]: time="2026-01-16T21:17:52.603587956Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 16 21:17:52.603746 containerd[2539]: time="2026-01-16T21:17:52.603598798Z" level=info msg="runtime interface created"
Jan 16 21:17:52.603746 containerd[2539]: time="2026-01-16T21:17:52.603603588Z" level=info msg="created NRI interface"
Jan 16 21:17:52.603746 containerd[2539]: time="2026-01-16T21:17:52.603610700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 16 21:17:52.603746 containerd[2539]: time="2026-01-16T21:17:52.603621241Z" level=info msg="Connect containerd service"
Jan 16 21:17:52.603746 containerd[2539]: time="2026-01-16T21:17:52.603646018Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 16 21:17:52.606221 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 16 21:17:52.610301 containerd[2539]: time="2026-01-16T21:17:52.606753875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 21:17:52.617959 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 21:17:52.623281 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 21:17:52.628027 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 16 21:17:52.630383 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 21:17:52.709372 tar[2516]: linux-amd64/README.md
Jan 16 21:17:52.723507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 16 21:17:52.781676 containerd[2539]: time="2026-01-16T21:17:52.781656903Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 16 21:17:52.781777 containerd[2539]: time="2026-01-16T21:17:52.781767825Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 16 21:17:52.781825 containerd[2539]: time="2026-01-16T21:17:52.781818225Z" level=info msg="Start subscribing containerd event"
Jan 16 21:17:52.781982 containerd[2539]: time="2026-01-16T21:17:52.781963678Z" level=info msg="Start recovering state"
Jan 16 21:17:52.782061 containerd[2539]: time="2026-01-16T21:17:52.782055419Z" level=info msg="Start event monitor"
Jan 16 21:17:52.782129 containerd[2539]: time="2026-01-16T21:17:52.782086567Z" level=info msg="Start cni network conf syncer for default"
Jan 16 21:17:52.782178 containerd[2539]: time="2026-01-16T21:17:52.782172494Z" level=info msg="Start streaming server"
Jan 16 21:17:52.782214 containerd[2539]: time="2026-01-16T21:17:52.782208371Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 16 21:17:52.782244 containerd[2539]: time="2026-01-16T21:17:52.782238570Z" level=info msg="runtime interface starting up..."
Jan 16 21:17:52.782270 containerd[2539]: time="2026-01-16T21:17:52.782265195Z" level=info msg="starting plugins..."
Jan 16 21:17:52.782301 containerd[2539]: time="2026-01-16T21:17:52.782295677Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 16 21:17:52.787411 containerd[2539]: time="2026-01-16T21:17:52.786926291Z" level=info msg="containerd successfully booted in 0.236280s"
Jan 16 21:17:52.787350 systemd[1]: Started containerd.service - containerd container runtime.
Jan 16 21:17:53.168651 waagent[2625]: 2026-01-16T21:17:53.168485Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jan 16 21:17:53.173909 waagent[2625]: 2026-01-16T21:17:53.171907Z INFO Daemon Daemon OS: flatcar 4580.0.0
Jan 16 21:17:53.175940 waagent[2625]: 2026-01-16T21:17:53.174901Z INFO Daemon Daemon Python: 3.12.11
Jan 16 21:17:53.178416 waagent[2625]: 2026-01-16T21:17:53.178062Z INFO Daemon Daemon Run daemon
Jan 16 21:17:53.179220 waagent[2625]: 2026-01-16T21:17:53.179186Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4580.0.0'
Jan 16 21:17:53.181514 waagent[2625]: 2026-01-16T21:17:53.181397Z INFO Daemon Daemon Using waagent for provisioning
Jan 16 21:17:53.184112 waagent[2625]: 2026-01-16T21:17:53.184068Z INFO Daemon Daemon Activate resource disk
Jan 16 21:17:53.185381 waagent[2625]: 2026-01-16T21:17:53.185347Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 16 21:17:53.188431 waagent[2625]: 2026-01-16T21:17:53.188397Z INFO Daemon Daemon Found device: None
Jan 16 21:17:53.191050 waagent[2625]: 2026-01-16T21:17:53.190918Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 16 21:17:53.192660 waagent[2625]: 2026-01-16T21:17:53.192628Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 16 21:17:53.195585 waagent[2625]: 2026-01-16T21:17:53.195465Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 16 21:17:53.197700 waagent[2625]: 2026-01-16T21:17:53.197666Z INFO Daemon Daemon Running default provisioning handler
Jan 16 21:17:53.206724 waagent[2625]: 2026-01-16T21:17:53.205498Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 16 21:17:53.210607 waagent[2625]: 2026-01-16T21:17:53.210570Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 16 21:17:53.214954 waagent[2625]: 2026-01-16T21:17:53.214908Z INFO Daemon Daemon cloud-init is enabled: False
Jan 16 21:17:53.218375 waagent[2625]: 2026-01-16T21:17:53.216644Z INFO Daemon Daemon Copying ovf-env.xml
Jan 16 21:17:53.253534 waagent[2625]: 2026-01-16T21:17:53.253489Z INFO Daemon Daemon Successfully mounted dvd
Jan 16 21:17:53.265604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 21:17:53.267301 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 16 21:17:53.270852 systemd[1]: Startup finished in 2.808s (kernel) + 7.678s (initrd) + 5.535s (userspace) = 16.021s.
Jan 16 21:17:53.272308 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 16 21:17:53.274909 waagent[2625]: 2026-01-16T21:17:53.274875Z INFO Daemon Daemon Detect protocol endpoint
Jan 16 21:17:53.275244 waagent[2625]: 2026-01-16T21:17:53.275007Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 16 21:17:53.275244 waagent[2625]: 2026-01-16T21:17:53.275280Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 16 21:17:53.284690 waagent[2625]: 2026-01-16T21:17:53.275434Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 16 21:17:53.284690 waagent[2625]: 2026-01-16T21:17:53.275567Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 16 21:17:53.284690 waagent[2625]: 2026-01-16T21:17:53.275720Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 16 21:17:53.283970 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 21:17:53.288322 waagent[2625]: 2026-01-16T21:17:53.286739Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 16 21:17:53.290386 waagent[2625]: 2026-01-16T21:17:53.289411Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 16 21:17:53.290386 waagent[2625]: 2026-01-16T21:17:53.289535Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 16 21:17:53.377091 login[2630]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:17:53.384816 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 16 21:17:53.388156 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 16 21:17:53.397810 systemd-logind[2504]: New session 1 of user core.
Jan 16 21:17:53.404051 waagent[2625]: 2026-01-16T21:17:53.404008Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 16 21:17:53.406859 waagent[2625]: 2026-01-16T21:17:53.404253Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 16 21:17:53.408570 waagent[2625]: 2026-01-16T21:17:53.408526Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 16 21:17:53.418062 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 16 21:17:53.421110 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 16 21:17:53.425982 waagent[2625]: 2026-01-16T21:17:53.425784Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Jan 16 21:17:53.427913 waagent[2625]: 2026-01-16T21:17:53.427460Z INFO Daemon
Jan 16 21:17:53.429188 waagent[2625]: 2026-01-16T21:17:53.428460Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 52849014-8eee-4359-ae56-e82f746ee476 eTag: 13560280107228136022 source: Fabric]
Jan 16 21:17:53.431391 waagent[2625]: 2026-01-16T21:17:53.431286Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 16 21:17:53.431605 waagent[2625]: 2026-01-16T21:17:53.431569Z INFO Daemon
Jan 16 21:17:53.432211 waagent[2625]: 2026-01-16T21:17:53.431656Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 16 21:17:53.435465 (systemd)[2673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:17:53.438850 waagent[2625]: 2026-01-16T21:17:53.437813Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 16 21:17:53.443036 systemd-logind[2504]: New session 2 of user core.
Jan 16 21:17:53.586516 systemd[2673]: Queued start job for default target default.target.
Jan 16 21:17:53.590515 systemd[2673]: Created slice app.slice - User Application Slice.
Jan 16 21:17:53.590545 systemd[2673]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 16 21:17:53.590558 systemd[2673]: Reached target paths.target - Paths.
Jan 16 21:17:53.590588 systemd[2673]: Reached target timers.target - Timers.
Jan 16 21:17:53.592224 systemd[2673]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 16 21:17:53.595965 systemd[2673]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 16 21:17:53.613872 waagent[2625]: 2026-01-16T21:17:53.612715Z INFO Daemon Downloaded certificate {'thumbprint': 'ABD78B6C25ABECD75ABE4C30DCBD55E5BF92E778', 'hasPrivateKey': True}
Jan 16 21:17:53.617716 waagent[2625]: 2026-01-16T21:17:53.616579Z INFO Daemon Fetch goal state completed
Jan 16 21:17:53.624561 systemd[2673]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 16 21:17:53.633668 login[2631]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:17:53.637375 systemd[2673]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 16 21:17:53.637458 systemd[2673]: Reached target sockets.target - Sockets.
Jan 16 21:17:53.637489 systemd[2673]: Reached target basic.target - Basic System.
Jan 16 21:17:53.637514 systemd[2673]: Reached target default.target - Main User Target.
Jan 16 21:17:53.637532 systemd[2673]: Startup finished in 189ms.
Jan 16 21:17:53.638934 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 16 21:17:53.642995 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 16 21:17:53.645582 systemd-logind[2504]: New session 3 of user core.
Jan 16 21:17:53.653007 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 16 21:17:53.662485 waagent[2625]: 2026-01-16T21:17:53.661727Z INFO Daemon Daemon Starting provisioning
Jan 16 21:17:53.663409 waagent[2625]: 2026-01-16T21:17:53.663286Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 16 21:17:53.664945 waagent[2625]: 2026-01-16T21:17:53.664612Z INFO Daemon Daemon Set hostname [ci-4580.0.0-p-452f1e7704]
Jan 16 21:17:53.668782 waagent[2625]: 2026-01-16T21:17:53.668741Z INFO Daemon Daemon Publish hostname [ci-4580.0.0-p-452f1e7704]
Jan 16 21:17:53.671953 waagent[2625]: 2026-01-16T21:17:53.671490Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 16 21:17:53.672638 waagent[2625]: 2026-01-16T21:17:53.672257Z INFO Daemon Daemon Primary interface is [eth0]
Jan 16 21:17:53.683381 systemd-networkd[2320]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:17:53.683434 systemd-networkd[2320]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:17:53.683487 systemd-networkd[2320]: eth0: DHCP lease lost
Jan 16 21:17:53.695024 waagent[2625]: 2026-01-16T21:17:53.694975Z INFO Daemon Daemon Create user account if not exists
Jan 16 21:17:53.698026 waagent[2625]: 2026-01-16T21:17:53.697003Z INFO Daemon Daemon User core already exists, skip useradd
Jan 16 21:17:53.700409 waagent[2625]: 2026-01-16T21:17:53.700268Z INFO Daemon Daemon Configure sudoer
Jan 16 21:17:53.704706 waagent[2625]: 2026-01-16T21:17:53.704663Z INFO Daemon Daemon Configure sshd
Jan 16 21:17:53.704908 systemd-networkd[2320]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 16 21:17:53.710237 waagent[2625]: 2026-01-16T21:17:53.709576Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 16 21:17:53.710237 waagent[2625]: 2026-01-16T21:17:53.709723Z INFO Daemon Daemon Deploy ssh public key.
Jan 16 21:17:53.826820 kubelet[2660]: E0116 21:17:53.826792    2660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 21:17:53.828174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 21:17:53.828288 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 21:17:53.828626 systemd[1]: kubelet.service: Consumed 823ms CPU time, 263.6M memory peak.
Jan 16 21:17:54.788674 waagent[2625]: 2026-01-16T21:17:54.788601Z INFO Daemon Daemon Provisioning complete
Jan 16 21:17:54.801171 waagent[2625]: 2026-01-16T21:17:54.801141Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 16 21:17:54.803214 waagent[2625]: 2026-01-16T21:17:54.801311Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 16 21:17:54.803214 waagent[2625]: 2026-01-16T21:17:54.801521Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jan 16 21:17:54.901022 waagent[2720]: 2026-01-16T21:17:54.900963Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jan 16 21:17:54.901223 waagent[2720]: 2026-01-16T21:17:54.901054Z INFO ExtHandler ExtHandler OS: flatcar 4580.0.0
Jan 16 21:17:54.901223 waagent[2720]: 2026-01-16T21:17:54.901103Z INFO ExtHandler ExtHandler Python: 3.12.11
Jan 16 21:17:54.901223 waagent[2720]: 2026-01-16T21:17:54.901143Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jan 16 21:17:54.909752 waagent[2720]: 2026-01-16T21:17:54.909708Z INFO ExtHandler ExtHandler Distro: flatcar-4580.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jan 16 21:17:54.909920 waagent[2720]: 2026-01-16T21:17:54.909897Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 16 21:17:54.909984 waagent[2720]: 2026-01-16T21:17:54.909951Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 16 21:17:54.916036 waagent[2720]: 2026-01-16T21:17:54.915991Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 16 21:17:54.927230 waagent[2720]: 2026-01-16T21:17:54.927201Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Jan 16 21:17:54.927547 waagent[2720]: 2026-01-16T21:17:54.927515Z INFO ExtHandler
Jan 16 21:17:54.927595 waagent[2720]: 2026-01-16T21:17:54.927574Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4c9d16a1-b04b-43ff-8ac4-1e8b09e58d0f eTag: 13560280107228136022 source: Fabric]
Jan 16 21:17:54.927787 waagent[2720]: 2026-01-16T21:17:54.927763Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 16 21:17:54.928120 waagent[2720]: 2026-01-16T21:17:54.928093Z INFO ExtHandler Jan 16 21:17:54.928153 waagent[2720]: 2026-01-16T21:17:54.928136Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 16 21:17:54.931086 waagent[2720]: 2026-01-16T21:17:54.931058Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 16 21:17:54.998154 waagent[2720]: 2026-01-16T21:17:54.998111Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ABD78B6C25ABECD75ABE4C30DCBD55E5BF92E778', 'hasPrivateKey': True} Jan 16 21:17:54.998439 waagent[2720]: 2026-01-16T21:17:54.998412Z INFO ExtHandler Fetch goal state completed Jan 16 21:17:55.007098 waagent[2720]: 2026-01-16T21:17:55.007059Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 16 21:17:55.011138 waagent[2720]: 2026-01-16T21:17:55.011101Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2720 Jan 16 21:17:55.011248 waagent[2720]: 2026-01-16T21:17:55.011226Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 16 21:17:55.011475 waagent[2720]: 2026-01-16T21:17:55.011453Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 16 21:17:55.012418 waagent[2720]: 2026-01-16T21:17:55.012387Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4580.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 16 21:17:55.012688 waagent[2720]: 2026-01-16T21:17:55.012663Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4580.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 16 21:17:55.012787 waagent[2720]: 2026-01-16T21:17:55.012768Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 16 21:17:55.013231 waagent[2720]: 2026-01-16T21:17:55.013207Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jan 16 21:17:55.021092 waagent[2720]: 2026-01-16T21:17:55.021072Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 16 21:17:55.021216 waagent[2720]: 2026-01-16T21:17:55.021197Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 16 21:17:55.025735 waagent[2720]: 2026-01-16T21:17:55.025407Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 16 21:17:55.029769 systemd[1]: Reload requested from client PID 2735 ('systemctl') (unit waagent.service)... Jan 16 21:17:55.029781 systemd[1]: Reloading... Jan 16 21:17:55.102858 zram_generator::config[2780]: No configuration found. Jan 16 21:17:55.256425 systemd[1]: Reloading finished in 226 ms. Jan 16 21:17:55.273647 waagent[2720]: 2026-01-16T21:17:55.273143Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 16 21:17:55.273647 waagent[2720]: 2026-01-16T21:17:55.273228Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 16 21:17:55.351805 waagent[2720]: 2026-01-16T21:17:55.351763Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 16 21:17:55.352044 waagent[2720]: 2026-01-16T21:17:55.352014Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 16 21:17:55.352538 waagent[2720]: 2026-01-16T21:17:55.352512Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 16 21:17:55.353028 waagent[2720]: 2026-01-16T21:17:55.353002Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jan 16 21:17:55.353072 waagent[2720]: 2026-01-16T21:17:55.353047Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 16 21:17:55.353135 waagent[2720]: 2026-01-16T21:17:55.353104Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 16 21:17:55.353338 waagent[2720]: 2026-01-16T21:17:55.353313Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 16 21:17:55.353419 waagent[2720]: 2026-01-16T21:17:55.353384Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 16 21:17:55.353663 waagent[2720]: 2026-01-16T21:17:55.353639Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 16 21:17:55.353887 waagent[2720]: 2026-01-16T21:17:55.353796Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 16 21:17:55.353887 waagent[2720]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 16 21:17:55.353887 waagent[2720]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 16 21:17:55.353887 waagent[2720]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 16 21:17:55.353887 waagent[2720]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 16 21:17:55.353887 waagent[2720]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 16 21:17:55.353887 waagent[2720]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 16 21:17:55.354023 waagent[2720]: 2026-01-16T21:17:55.353916Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 16 21:17:55.354042 waagent[2720]: 2026-01-16T21:17:55.354027Z INFO EnvHandler ExtHandler Configure routes Jan 16 21:17:55.354100 waagent[2720]: 2026-01-16T21:17:55.354068Z INFO EnvHandler ExtHandler Gateway:None Jan 16 21:17:55.354100 waagent[2720]: 2026-01-16T21:17:55.354130Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 16 21:17:55.354199 waagent[2720]: 2026-01-16T21:17:55.354182Z INFO EnvHandler ExtHandler Routes:None Jan 16 21:17:55.355001 waagent[2720]: 2026-01-16T21:17:55.354970Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 16 21:17:55.355053 waagent[2720]: 2026-01-16T21:17:55.355019Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 16 21:17:55.355721 waagent[2720]: 2026-01-16T21:17:55.355451Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 16 21:17:55.360626 waagent[2720]: 2026-01-16T21:17:55.360557Z INFO ExtHandler ExtHandler Jan 16 21:17:55.360747 waagent[2720]: 2026-01-16T21:17:55.360726Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f28ed994-93be-4fc8-950b-865fa03c5418 correlation 35c687fe-adba-4b0b-a995-e5eb2faed624 created: 2026-01-16T21:17:25.869003Z] Jan 16 21:17:55.361089 waagent[2720]: 2026-01-16T21:17:55.361064Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 16 21:17:55.361622 waagent[2720]: 2026-01-16T21:17:55.361592Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 16 21:17:55.375602 waagent[2720]: 2026-01-16T21:17:55.375529Z INFO MonitorHandler ExtHandler Network interfaces: Jan 16 21:17:55.375602 waagent[2720]: Executing ['ip', '-a', '-o', 'link']: Jan 16 21:17:55.375602 waagent[2720]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 16 21:17:55.375602 waagent[2720]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:5f:e7 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7ced8d725fe7 Jan 16 21:17:55.375602 waagent[2720]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:5f:e7 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 16 21:17:55.375602 waagent[2720]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 16 21:17:55.375602 waagent[2720]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 16 21:17:55.375602 waagent[2720]: 2: eth0 inet 10.200.8.41/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 16 21:17:55.375602 waagent[2720]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 16 21:17:55.375602 waagent[2720]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 16 21:17:55.375602 waagent[2720]: 2: eth0 inet6 fe80::7eed:8dff:fe72:5fe7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 16 21:17:55.390132 waagent[2720]: 2026-01-16T21:17:55.390095Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 16 21:17:55.390132 waagent[2720]: Try `iptables -h' or 'iptables --help' for 
more information.) Jan 16 21:17:55.390449 waagent[2720]: 2026-01-16T21:17:55.390419Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 438481A6-3424-4A30-B619-2B97DA6E866A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 16 21:17:55.400462 waagent[2720]: 2026-01-16T21:17:55.400418Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 16 21:17:55.400462 waagent[2720]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.400462 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.400462 waagent[2720]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.400462 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.400462 waagent[2720]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.400462 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.400462 waagent[2720]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 16 21:17:55.400462 waagent[2720]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 16 21:17:55.400462 waagent[2720]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 16 21:17:55.402719 waagent[2720]: 2026-01-16T21:17:55.402678Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 16 21:17:55.402719 waagent[2720]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.402719 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.402719 waagent[2720]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.402719 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.402719 waagent[2720]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 16 21:17:55.402719 waagent[2720]: pkts bytes target prot opt in out source destination Jan 16 21:17:55.402719 waagent[2720]: 0 0 
ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 16 21:17:55.402719 waagent[2720]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 16 21:17:55.402719 waagent[2720]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 16 21:18:04.079061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 21:18:04.080226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:04.609233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:04.615122 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:04.646477 kubelet[2875]: E0116 21:18:04.646450 2875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:04.649051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:04.649170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:04.649484 systemd[1]: kubelet.service: Consumed 113ms CPU time, 110.4M memory peak. Jan 16 21:18:12.230714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 21:18:12.231723 systemd[1]: Started sshd@0-10.200.8.41:22-10.200.16.10:49478.service - OpenSSH per-connection server daemon (10.200.16.10:49478). Jan 16 21:18:12.805713 sshd[2883]: Accepted publickey for core from 10.200.16.10 port 49478 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:12.806555 sshd-session[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:12.809876 systemd-logind[2504]: New session 4 of user core. 
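The kubelet crash loop runs on a roughly ten-second cadence: the first exit is logged at 21:17:53.828 and systemd schedules the restart at 21:18:04.079, consistent with a `Restart=on-failure` unit using a ~10 s `RestartSec` (the unit file itself is not shown here, so that is an inference from the timestamps). The gap can be read straight off the journal times:

```python
from datetime import datetime

def seconds_between(t1: str, t2: str) -> float:
    """Interval between two journal timestamps like '21:17:53.828174'
    (same day assumed)."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds()
```

For the failure/restart pair above this gives about 10.25 seconds, and the second loop (21:18:04.649 exit, 21:18:14.725 restart) shows the same spacing.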
Jan 16 21:18:12.812983 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 21:18:13.218704 systemd[1]: Started sshd@1-10.200.8.41:22-10.200.16.10:49486.service - OpenSSH per-connection server daemon (10.200.16.10:49486). Jan 16 21:18:13.757093 sshd[2890]: Accepted publickey for core from 10.200.16.10 port 49486 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:13.757480 sshd-session[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:13.761074 systemd-logind[2504]: New session 5 of user core. Jan 16 21:18:13.766964 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 21:18:14.062488 sshd[2894]: Connection closed by 10.200.16.10 port 49486 Jan 16 21:18:14.062797 sshd-session[2890]: pam_unix(sshd:session): session closed for user core Jan 16 21:18:14.065580 systemd[1]: sshd@1-10.200.8.41:22-10.200.16.10:49486.service: Deactivated successfully. Jan 16 21:18:14.067149 systemd-logind[2504]: Session 5 logged out. Waiting for processes to exit. Jan 16 21:18:14.067266 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 21:18:14.068717 systemd-logind[2504]: Removed session 5. Jan 16 21:18:14.175670 systemd[1]: Started sshd@2-10.200.8.41:22-10.200.16.10:49492.service - OpenSSH per-connection server daemon (10.200.16.10:49492). Jan 16 21:18:14.724178 sshd[2900]: Accepted publickey for core from 10.200.16.10 port 49492 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:14.724998 sshd-session[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:14.725630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 21:18:14.727390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:14.730880 systemd-logind[2504]: New session 6 of user core. Jan 16 21:18:14.732633 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 16 21:18:15.035055 sshd[2907]: Connection closed by 10.200.16.10 port 49492 Jan 16 21:18:15.035476 sshd-session[2900]: pam_unix(sshd:session): session closed for user core Jan 16 21:18:15.037999 systemd-logind[2504]: Session 6 logged out. Waiting for processes to exit. Jan 16 21:18:15.038207 systemd[1]: sshd@2-10.200.8.41:22-10.200.16.10:49492.service: Deactivated successfully. Jan 16 21:18:15.039950 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 21:18:15.040777 systemd-logind[2504]: Removed session 6. Jan 16 21:18:15.146667 systemd[1]: Started sshd@3-10.200.8.41:22-10.200.16.10:49500.service - OpenSSH per-connection server daemon (10.200.16.10:49500). Jan 16 21:18:15.218824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:15.226996 (kubelet)[2921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:15.260658 kubelet[2921]: E0116 21:18:15.260626 2921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:15.262023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:15.262156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:15.262457 systemd[1]: kubelet.service: Consumed 113ms CPU time, 109M memory peak. Jan 16 21:18:15.685436 sshd[2913]: Accepted publickey for core from 10.200.16.10 port 49500 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:15.686203 sshd-session[2913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:15.689752 systemd-logind[2504]: New session 7 of user core. 
Jan 16 21:18:15.696986 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 21:18:15.725467 chronyd[2488]: Selected source PHC0 Jan 16 21:18:15.992301 sshd[2929]: Connection closed by 10.200.16.10 port 49500 Jan 16 21:18:15.992950 sshd-session[2913]: pam_unix(sshd:session): session closed for user core Jan 16 21:18:15.994986 systemd[1]: sshd@3-10.200.8.41:22-10.200.16.10:49500.service: Deactivated successfully. Jan 16 21:18:15.996723 systemd-logind[2504]: Session 7 logged out. Waiting for processes to exit. Jan 16 21:18:15.996827 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 21:18:15.998135 systemd-logind[2504]: Removed session 7. Jan 16 21:18:16.104617 systemd[1]: Started sshd@4-10.200.8.41:22-10.200.16.10:49508.service - OpenSSH per-connection server daemon (10.200.16.10:49508). Jan 16 21:18:16.652300 sshd[2935]: Accepted publickey for core from 10.200.16.10 port 49508 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:16.652660 sshd-session[2935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:16.656186 systemd-logind[2504]: New session 8 of user core. Jan 16 21:18:16.662990 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 21:18:16.898741 sudo[2940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 21:18:16.898977 sudo[2940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:18:16.910384 sudo[2940]: pam_unix(sudo:session): session closed for user root Jan 16 21:18:17.013697 sshd[2939]: Connection closed by 10.200.16.10 port 49508 Jan 16 21:18:17.014922 sshd-session[2935]: pam_unix(sshd:session): session closed for user core Jan 16 21:18:17.017461 systemd-logind[2504]: Session 8 logged out. Waiting for processes to exit. Jan 16 21:18:17.017561 systemd[1]: sshd@4-10.200.8.41:22-10.200.16.10:49508.service: Deactivated successfully. 
Jan 16 21:18:17.018774 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 21:18:17.019789 systemd-logind[2504]: Removed session 8. Jan 16 21:18:17.132669 systemd[1]: Started sshd@5-10.200.8.41:22-10.200.16.10:49512.service - OpenSSH per-connection server daemon (10.200.16.10:49512). Jan 16 21:18:17.680937 sshd[2947]: Accepted publickey for core from 10.200.16.10 port 49512 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:17.681655 sshd-session[2947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:17.685056 systemd-logind[2504]: New session 9 of user core. Jan 16 21:18:17.690958 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 21:18:17.892012 sudo[2953]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 21:18:17.892210 sudo[2953]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:18:17.895699 sudo[2953]: pam_unix(sudo:session): session closed for user root Jan 16 21:18:17.899470 sudo[2952]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 21:18:17.899689 sudo[2952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:18:17.904499 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:18:17.931426 kernel: kauditd_printk_skb: 147 callbacks suppressed Jan 16 21:18:17.931485 kernel: audit: type=1305 audit(1768598297.929:244): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:18:17.929000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:18:17.931579 augenrules[2977]: No rules Jan 16 21:18:17.931749 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 16 21:18:17.929000 audit[2977]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffebde9d020 a2=420 a3=0 items=0 ppid=2958 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:17.932269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 21:18:17.935708 kernel: audit: type=1300 audit(1768598297.929:244): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffebde9d020 a2=420 a3=0 items=0 ppid=2958 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:17.935762 kernel: audit: type=1327 audit(1768598297.929:244): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:18:17.929000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:18:17.937256 kernel: audit: type=1130 audit(1768598297.932:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.936282 sudo[2952]: pam_unix(sudo:session): session closed for user root Jan 16 21:18:17.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:17.938747 kernel: audit: type=1131 audit(1768598297.932:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.938819 kernel: audit: type=1106 audit(1768598297.935:247): pid=2952 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.935000 audit[2952]: USER_END pid=2952 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.935000 audit[2952]: CRED_DISP pid=2952 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:17.941664 kernel: audit: type=1104 audit(1768598297.935:248): pid=2952 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.041079 sshd[2951]: Connection closed by 10.200.16.10 port 49512 Jan 16 21:18:18.041378 sshd-session[2947]: pam_unix(sshd:session): session closed for user core Jan 16 21:18:18.041000 audit[2947]: USER_END pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.045295 systemd[1]: sshd@5-10.200.8.41:22-10.200.16.10:49512.service: Deactivated successfully. 
Jan 16 21:18:18.047336 kernel: audit: type=1106 audit(1768598298.041:249): pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.047382 kernel: audit: type=1104 audit(1768598298.041:250): pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.041000 audit[2947]: CRED_DISP pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.049877 kernel: audit: type=1131 audit(1768598298.044:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.41:22-10.200.16.10:49512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.41:22-10.200.16.10:49512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.048342 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 21:18:18.050670 systemd-logind[2504]: Session 9 logged out. Waiting for processes to exit. Jan 16 21:18:18.051304 systemd-logind[2504]: Removed session 9. Jan 16 21:18:18.156585 systemd[1]: Started sshd@6-10.200.8.41:22-10.200.16.10:49524.service - OpenSSH per-connection server daemon (10.200.16.10:49524). 
Jan 16 21:18:18.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.41:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.695000 audit[2986]: USER_ACCT pid=2986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.695996 sshd[2986]: Accepted publickey for core from 10.200.16.10 port 49524 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:18:18.695000 audit[2986]: CRED_ACQ pid=2986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.695000 audit[2986]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef5eeefa0 a2=3 a3=0 items=0 ppid=1 pid=2986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:18.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:18:18.696773 sshd-session[2986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:18:18.700349 systemd-logind[2504]: New session 10 of user core. Jan 16 21:18:18.705971 systemd[1]: Started session-10.scope - Session 10 of User core. 
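The audit records above carry the process command line as a PROCTITLE hex blob: the kernel hex-encodes argv with NUL bytes separating the arguments. Decoding the proctitle on the sshd acceptance record recovers `sshd-session: core [priv]`, and the earlier auditctl record decodes to `/sbin/auditctl -R /etc/audit/audit.rules`. A one-liner does it:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL
    bytes between arguments, rendered here with spaces."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()
```

The same trick applies to the Docker iptables PROCTITLE records further down, which decode to the `iptables --wait -t nat -N DOCKER` style chain-creation commands.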
Jan 16 21:18:18.706000 audit[2986]: USER_START pid=2986 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.707000 audit[2990]: CRED_ACQ pid=2990 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:18:18.902000 audit[2991]: USER_ACCT pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.903250 sudo[2991]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 21:18:18.902000 audit[2991]: CRED_REFR pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:18.903467 sudo[2991]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:18:18.902000 audit[2991]: USER_START pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:18:19.379961 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 21:18:19.393002 (dockerd)[3011]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 21:18:19.706372 dockerd[3011]: time="2026-01-16T21:18:19.706185577Z" level=info msg="Starting up" Jan 16 21:18:19.708604 dockerd[3011]: time="2026-01-16T21:18:19.708583221Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 21:18:19.716786 dockerd[3011]: time="2026-01-16T21:18:19.716767062Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 21:18:19.739303 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2897381728-merged.mount: Deactivated successfully. Jan 16 21:18:19.843968 dockerd[3011]: time="2026-01-16T21:18:19.843933310Z" level=info msg="Loading containers: start." Jan 16 21:18:19.854927 kernel: Initializing XFRM netlink socket Jan 16 21:18:19.876000 audit[3058]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.876000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4b3cb040 a2=0 a3=0 items=0 ppid=3011 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:18:19.878000 audit[3060]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.878000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffcba72f20 a2=0 a3=0 items=0 ppid=3011 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:18:19.880000 audit[3062]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.880000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3fb5b120 a2=0 a3=0 items=0 ppid=3011 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.880000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:18:19.881000 audit[3064]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.881000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab775480 a2=0 a3=0 items=0 ppid=3011 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:18:19.883000 audit[3066]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.883000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3c76cec0 a2=0 a3=0 items=0 ppid=3011 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:18:19.884000 audit[3068]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.884000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd51e18470 a2=0 a3=0 items=0 ppid=3011 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:18:19.886000 audit[3070]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.886000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2267b100 a2=0 a3=0 items=0 ppid=3011 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:18:19.887000 audit[3072]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.887000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffefc88ce70 a2=0 a3=0 items=0 ppid=3011 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:18:19.906000 audit[3075]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.906000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd4879e2c0 a2=0 a3=0 items=0 ppid=3011 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 21:18:19.908000 audit[3077]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.908000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd26799580 a2=0 a3=0 items=0 ppid=3011 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:18:19.909000 audit[3079]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.909000 audit[3079]: SYSCALL arch=c000003e syscall=46 
success=yes exit=236 a0=3 a1=7fff1cf97270 a2=0 a3=0 items=0 ppid=3011 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.909000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:18:19.911000 audit[3081]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.911000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe7ce58700 a2=0 a3=0 items=0 ppid=3011 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:18:19.913000 audit[3083]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.913000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffc6816950 a2=0 a3=0 items=0 ppid=3011 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:18:19.947000 audit[3113]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.947000 
audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcf8670150 a2=0 a3=0 items=0 ppid=3011 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:18:19.949000 audit[3115]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.949000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde337d210 a2=0 a3=0 items=0 ppid=3011 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:18:19.950000 audit[3117]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.950000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc94852210 a2=0 a3=0 items=0 ppid=3011 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:18:19.952000 audit[3119]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.952000 audit[3119]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa5e76f30 a2=0 a3=0 items=0 ppid=3011 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:18:19.953000 audit[3121]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.953000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebb77f2e0 a2=0 a3=0 items=0 ppid=3011 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:18:19.955000 audit[3123]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.955000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9b89d530 a2=0 a3=0 items=0 ppid=3011 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:18:19.956000 audit[3125]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.956000 
audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb9430140 a2=0 a3=0 items=0 ppid=3011 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:18:19.958000 audit[3127]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.958000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe352a0bb0 a2=0 a3=0 items=0 ppid=3011 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:18:19.960000 audit[3129]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.960000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe1d8f7430 a2=0 a3=0 items=0 ppid=3011 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 
21:18:19.962000 audit[3131]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.962000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd3d5c9380 a2=0 a3=0 items=0 ppid=3011 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:18:19.963000 audit[3133]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.963000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd3b1ba960 a2=0 a3=0 items=0 ppid=3011 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:18:19.965000 audit[3135]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.965000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff34d7d770 a2=0 a3=0 items=0 ppid=3011 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.965000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:18:19.966000 audit[3137]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.966000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe20052990 a2=0 a3=0 items=0 ppid=3011 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:18:19.970000 audit[3142]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.970000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0a077820 a2=0 a3=0 items=0 ppid=3011 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:18:19.972000 audit[3144]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.972000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe7ce9e530 a2=0 a3=0 items=0 ppid=3011 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:18:19.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:18:19.973000 audit[3146]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:19.973000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb992dc60 a2=0 a3=0 items=0 ppid=3011 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:18:19.975000 audit[3148]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.975000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2ec0b2c0 a2=0 a3=0 items=0 ppid=3011 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:18:19.976000 audit[3150]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.976000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffff6dc5b20 a2=0 a3=0 items=0 ppid=3011 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.976000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:18:19.978000 audit[3152]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:19.978000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe07816ad0 a2=0 a3=0 items=0 ppid=3011 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:19.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:18:20.011000 audit[3157]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.011000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd7273bd10 a2=0 a3=0 items=0 ppid=3011 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 21:18:20.013000 audit[3159]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.013000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc4f2b34d0 a2=0 a3=0 items=0 ppid=3011 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 21:18:20.020000 audit[3167]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.020000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd5c45d360 a2=0 a3=0 items=0 ppid=3011 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 21:18:20.024000 audit[3172]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.024000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc6a5f5a40 a2=0 a3=0 items=0 ppid=3011 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 21:18:20.026000 audit[3174]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.026000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc561461a0 a2=0 a3=0 items=0 ppid=3011 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 21:18:20.027000 audit[3176]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.027000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcba2ef090 a2=0 a3=0 items=0 ppid=3011 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 21:18:20.029000 audit[3178]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.029000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd7c8b8950 a2=0 a3=0 items=0 ppid=3011 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:18:20.030000 audit[3180]: NETFILTER_CFG table=filter:44 family=2 entries=1 
op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:20.030000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd19c96c40 a2=0 a3=0 items=0 ppid=3011 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:20.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 21:18:20.031715 systemd-networkd[2320]: docker0: Link UP Jan 16 21:18:20.045395 dockerd[3011]: time="2026-01-16T21:18:20.045358915Z" level=info msg="Loading containers: done." Jan 16 21:18:20.103248 dockerd[3011]: time="2026-01-16T21:18:20.103216729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 21:18:20.103342 dockerd[3011]: time="2026-01-16T21:18:20.103263183Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 21:18:20.103342 dockerd[3011]: time="2026-01-16T21:18:20.103319808Z" level=info msg="Initializing buildkit" Jan 16 21:18:20.138166 dockerd[3011]: time="2026-01-16T21:18:20.138144978Z" level=info msg="Completed buildkit initialization" Jan 16 21:18:20.143045 dockerd[3011]: time="2026-01-16T21:18:20.143008164Z" level=info msg="Daemon has completed initialization" Jan 16 21:18:20.143502 dockerd[3011]: time="2026-01-16T21:18:20.143117774Z" level=info msg="API listen on /run/docker.sock" Jan 16 21:18:20.143220 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 16 21:18:20.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:21.145274 containerd[2539]: time="2026-01-16T21:18:21.145237646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 21:18:21.900715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536531459.mount: Deactivated successfully. Jan 16 21:18:22.824291 containerd[2539]: time="2026-01-16T21:18:22.824259989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:22.826492 containerd[2539]: time="2026-01-16T21:18:22.826393788Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 16 21:18:22.828660 containerd[2539]: time="2026-01-16T21:18:22.828641800Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:22.831673 containerd[2539]: time="2026-01-16T21:18:22.831648864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:22.832190 containerd[2539]: time="2026-01-16T21:18:22.832170780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.686891048s" Jan 16 21:18:22.832264 containerd[2539]: time="2026-01-16T21:18:22.832245368Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 16 21:18:22.832790 containerd[2539]: time="2026-01-16T21:18:22.832773239Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 21:18:24.356859 containerd[2539]: time="2026-01-16T21:18:24.356604209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:24.358675 containerd[2539]: time="2026-01-16T21:18:24.358543460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 16 21:18:24.360781 containerd[2539]: time="2026-01-16T21:18:24.360761001Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:24.364713 containerd[2539]: time="2026-01-16T21:18:24.364692460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:24.365397 containerd[2539]: time="2026-01-16T21:18:24.365167769Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.53237027s" Jan 16 21:18:24.365397 containerd[2539]: time="2026-01-16T21:18:24.365194309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference 
\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 16 21:18:24.365685 containerd[2539]: time="2026-01-16T21:18:24.365667551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 21:18:25.512490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 21:18:25.514091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:26.049605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:26.053996 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 16 21:18:26.054055 kernel: audit: type=1130 audit(1768598306.049:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:26.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:26.057021 (kubelet)[3295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:26.074490 containerd[2539]: time="2026-01-16T21:18:26.074399645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.076816 containerd[2539]: time="2026-01-16T21:18:26.076788674Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 16 21:18:26.078968 containerd[2539]: time="2026-01-16T21:18:26.078946835Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.083714 containerd[2539]: time="2026-01-16T21:18:26.083675358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.084489 containerd[2539]: time="2026-01-16T21:18:26.084459047Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.718767956s" Jan 16 21:18:26.084544 containerd[2539]: time="2026-01-16T21:18:26.084492675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 16 21:18:26.085881 containerd[2539]: time="2026-01-16T21:18:26.085616950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 
21:18:26.094837 kubelet[3295]: E0116 21:18:26.094807 3295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:26.096166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:26.096288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:26.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:26.099926 kernel: audit: type=1131 audit(1768598306.095:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:26.096624 systemd[1]: kubelet.service: Consumed 124ms CPU time, 108.9M memory peak. Jan 16 21:18:26.903692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902698463.mount: Deactivated successfully. 
Jan 16 21:18:27.208122 containerd[2539]: time="2026-01-16T21:18:27.208065148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:27.210217 containerd[2539]: time="2026-01-16T21:18:27.210159932Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 16 21:18:27.212535 containerd[2539]: time="2026-01-16T21:18:27.212515297Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:27.215437 containerd[2539]: time="2026-01-16T21:18:27.215404361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:27.215869 containerd[2539]: time="2026-01-16T21:18:27.215624895Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.129927506s" Jan 16 21:18:27.215869 containerd[2539]: time="2026-01-16T21:18:27.215646190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 16 21:18:27.216023 containerd[2539]: time="2026-01-16T21:18:27.216004632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 21:18:27.790194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542203028.mount: Deactivated successfully. 
Jan 16 21:18:28.439255 containerd[2539]: time="2026-01-16T21:18:28.439226307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:28.441223 containerd[2539]: time="2026-01-16T21:18:28.441050905Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570073" Jan 16 21:18:28.443331 containerd[2539]: time="2026-01-16T21:18:28.443311272Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:28.446430 containerd[2539]: time="2026-01-16T21:18:28.446407804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:28.447068 containerd[2539]: time="2026-01-16T21:18:28.447047603Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.231021038s" Jan 16 21:18:28.447115 containerd[2539]: time="2026-01-16T21:18:28.447072479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 16 21:18:28.447388 containerd[2539]: time="2026-01-16T21:18:28.447371728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 21:18:28.898726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261659921.mount: Deactivated successfully. 
Jan 16 21:18:28.913011 containerd[2539]: time="2026-01-16T21:18:28.912986068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:28.915125 containerd[2539]: time="2026-01-16T21:18:28.915104878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 21:18:28.917581 containerd[2539]: time="2026-01-16T21:18:28.917551399Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:28.921522 containerd[2539]: time="2026-01-16T21:18:28.921485966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:28.921905 containerd[2539]: time="2026-01-16T21:18:28.921823646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 474.428205ms" Jan 16 21:18:28.921905 containerd[2539]: time="2026-01-16T21:18:28.921857384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 16 21:18:28.922327 containerd[2539]: time="2026-01-16T21:18:28.922284443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 21:18:29.667929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034047869.mount: Deactivated 
successfully. Jan 16 21:18:31.435767 containerd[2539]: time="2026-01-16T21:18:31.435737956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:31.438115 containerd[2539]: time="2026-01-16T21:18:31.438096235Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55859474" Jan 16 21:18:31.440711 containerd[2539]: time="2026-01-16T21:18:31.440674878Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:31.444407 containerd[2539]: time="2026-01-16T21:18:31.444372022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:31.445022 containerd[2539]: time="2026-01-16T21:18:31.444934174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.522629286s" Jan 16 21:18:31.445022 containerd[2539]: time="2026-01-16T21:18:31.444967624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 16 21:18:33.575102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:33.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:33.575429 systemd[1]: kubelet.service: Consumed 124ms CPU time, 108.9M memory peak. Jan 16 21:18:33.580172 kernel: audit: type=1130 audit(1768598313.574:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:33.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:33.582062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:33.585432 kernel: audit: type=1131 audit(1768598313.574:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:33.604380 systemd[1]: Reload requested from client PID 3448 ('systemctl') (unit session-10.scope)... Jan 16 21:18:33.604484 systemd[1]: Reloading... Jan 16 21:18:33.706852 zram_generator::config[3510]: No configuration found. Jan 16 21:18:33.863852 systemd[1]: Reloading finished in 259 ms. Jan 16 21:18:33.989672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 21:18:33.989738 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 21:18:33.990223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:33.990274 systemd[1]: kubelet.service: Consumed 68ms CPU time, 78.1M memory peak. Jan 16 21:18:33.994866 kernel: audit: type=1130 audit(1768598313.989:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:18:33.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:33.994069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:33.992000 audit: BPF prog-id=86 op=LOAD Jan 16 21:18:33.992000 audit: BPF prog-id=79 op=UNLOAD Jan 16 21:18:33.992000 audit: BPF prog-id=87 op=LOAD Jan 16 21:18:33.992000 audit: BPF prog-id=88 op=LOAD Jan 16 21:18:33.992000 audit: BPF prog-id=80 op=UNLOAD Jan 16 21:18:33.992000 audit: BPF prog-id=81 op=UNLOAD Jan 16 21:18:33.994000 audit: BPF prog-id=89 op=LOAD Jan 16 21:18:33.994000 audit: BPF prog-id=70 op=UNLOAD Jan 16 21:18:33.994000 audit: BPF prog-id=90 op=LOAD Jan 16 21:18:33.994000 audit: BPF prog-id=91 op=LOAD Jan 16 21:18:33.994000 audit: BPF prog-id=71 op=UNLOAD Jan 16 21:18:33.994000 audit: BPF prog-id=72 op=UNLOAD Jan 16 21:18:33.996000 audit: BPF prog-id=92 op=LOAD Jan 16 21:18:33.996000 audit: BPF prog-id=66 op=UNLOAD Jan 16 21:18:33.997846 kernel: audit: type=1334 audit(1768598313.992:307): prog-id=86 op=LOAD Jan 16 21:18:33.997875 kernel: audit: type=1334 audit(1768598313.992:308): prog-id=79 op=UNLOAD Jan 16 21:18:33.997895 kernel: audit: type=1334 audit(1768598313.992:309): prog-id=87 op=LOAD Jan 16 21:18:33.997911 kernel: audit: type=1334 audit(1768598313.992:310): prog-id=88 op=LOAD Jan 16 21:18:33.997926 kernel: audit: type=1334 audit(1768598313.992:311): prog-id=80 op=UNLOAD Jan 16 21:18:33.997945 kernel: audit: type=1334 audit(1768598313.992:312): prog-id=81 op=UNLOAD Jan 16 21:18:33.997962 kernel: audit: type=1334 audit(1768598313.994:313): prog-id=89 op=LOAD Jan 16 21:18:33.997000 audit: BPF prog-id=93 op=LOAD Jan 16 21:18:33.997000 audit: BPF prog-id=82 op=UNLOAD Jan 16 21:18:33.997000 audit: BPF prog-id=94 op=LOAD Jan 16 21:18:33.997000 audit: BPF prog-id=67 op=UNLOAD Jan 16 
21:18:33.997000 audit: BPF prog-id=95 op=LOAD Jan 16 21:18:33.997000 audit: BPF prog-id=96 op=LOAD Jan 16 21:18:33.997000 audit: BPF prog-id=68 op=UNLOAD Jan 16 21:18:33.997000 audit: BPF prog-id=69 op=UNLOAD Jan 16 21:18:33.998000 audit: BPF prog-id=97 op=LOAD Jan 16 21:18:33.998000 audit: BPF prog-id=83 op=UNLOAD Jan 16 21:18:33.998000 audit: BPF prog-id=98 op=LOAD Jan 16 21:18:33.998000 audit: BPF prog-id=99 op=LOAD Jan 16 21:18:33.998000 audit: BPF prog-id=84 op=UNLOAD Jan 16 21:18:33.998000 audit: BPF prog-id=85 op=UNLOAD Jan 16 21:18:33.999000 audit: BPF prog-id=100 op=LOAD Jan 16 21:18:33.999000 audit: BPF prog-id=76 op=UNLOAD Jan 16 21:18:33.999000 audit: BPF prog-id=101 op=LOAD Jan 16 21:18:33.999000 audit: BPF prog-id=102 op=LOAD Jan 16 21:18:33.999000 audit: BPF prog-id=77 op=UNLOAD Jan 16 21:18:33.999000 audit: BPF prog-id=78 op=UNLOAD Jan 16 21:18:34.000000 audit: BPF prog-id=103 op=LOAD Jan 16 21:18:34.000000 audit: BPF prog-id=104 op=LOAD Jan 16 21:18:34.000000 audit: BPF prog-id=74 op=UNLOAD Jan 16 21:18:34.000000 audit: BPF prog-id=75 op=UNLOAD Jan 16 21:18:34.002000 audit: BPF prog-id=105 op=LOAD Jan 16 21:18:34.002000 audit: BPF prog-id=73 op=UNLOAD Jan 16 21:18:34.668755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:34.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:34.677139 (kubelet)[3565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:18:34.706112 kubelet[3565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 21:18:34.706276 kubelet[3565]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:18:34.706298 kubelet[3565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:18:34.706371 kubelet[3565]: I0116 21:18:34.706358 3565 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:18:34.820235 kubelet[3565]: I0116 21:18:34.820208 3565 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 21:18:34.820235 kubelet[3565]: I0116 21:18:34.820227 3565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:18:34.820420 kubelet[3565]: I0116 21:18:34.820408 3565 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 21:18:34.843271 kubelet[3565]: I0116 21:18:34.843127 3565 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:18:34.843578 kubelet[3565]: E0116 21:18:34.843458 3565 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:18:34.850749 kubelet[3565]: I0116 21:18:34.850732 3565 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:18:34.852284 kubelet[3565]: I0116 21:18:34.852263 3565 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 21:18:34.853869 kubelet[3565]: I0116 21:18:34.853849 3565 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:18:34.854006 kubelet[3565]: I0116 21:18:34.853869 3565 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580.0.0-p-452f1e7704","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:18:34.854112 kubelet[3565]: I0116 21:18:34.854014 3565 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 16 21:18:34.854112 kubelet[3565]: I0116 21:18:34.854023 3565 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 21:18:34.854150 kubelet[3565]: I0116 21:18:34.854114 3565 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:34.856741 kubelet[3565]: I0116 21:18:34.856729 3565 kubelet.go:446] "Attempting to sync node with API server" Jan 16 21:18:34.856792 kubelet[3565]: I0116 21:18:34.856750 3565 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:18:34.856792 kubelet[3565]: I0116 21:18:34.856770 3565 kubelet.go:352] "Adding apiserver pod source" Jan 16 21:18:34.856792 kubelet[3565]: I0116 21:18:34.856779 3565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:18:34.860678 kubelet[3565]: I0116 21:18:34.860652 3565 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:18:34.861014 kubelet[3565]: I0116 21:18:34.860999 3565 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 21:18:34.861513 kubelet[3565]: W0116 21:18:34.861500 3565 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 21:18:34.863442 kubelet[3565]: I0116 21:18:34.862956 3565 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 21:18:34.863442 kubelet[3565]: I0116 21:18:34.862985 3565 server.go:1287] "Started kubelet" Jan 16 21:18:34.863442 kubelet[3565]: W0116 21:18:34.863091 3565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 16 21:18:34.863442 kubelet[3565]: E0116 21:18:34.863127 3565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:18:34.863442 kubelet[3565]: W0116 21:18:34.863180 3565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-452f1e7704&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 16 21:18:34.863442 kubelet[3565]: E0116 21:18:34.863215 3565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580.0.0-p-452f1e7704&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:18:34.867851 kubelet[3565]: I0116 21:18:34.867705 3565 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:18:34.868389 kubelet[3565]: I0116 21:18:34.868366 3565 server.go:479] "Adding debug handlers to kubelet server" Jan 16 21:18:34.870236 kubelet[3565]: I0116 
21:18:34.870147 3565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 21:18:34.870867 kubelet[3565]: I0116 21:18:34.870785 3565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:18:34.871004 kubelet[3565]: I0116 21:18:34.870993 3565 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:18:34.871000 audit[3576]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.871000 audit[3576]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc32facaf0 a2=0 a3=0 items=0 ppid=3565 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 21:18:34.872000 audit[3577]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.872000 audit[3577]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8fc711c0 a2=0 a3=0 items=0 ppid=3565 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:18:34.874069 kubelet[3565]: E0116 21:18:34.872933 3565 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ci-4580.0.0-p-452f1e7704.188b52cc03c30313 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580.0.0-p-452f1e7704,UID:ci-4580.0.0-p-452f1e7704,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580.0.0-p-452f1e7704,},FirstTimestamp:2026-01-16 21:18:34.862969619 +0000 UTC m=+0.182661025,LastTimestamp:2026-01-16 21:18:34.862969619 +0000 UTC m=+0.182661025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580.0.0-p-452f1e7704,}" Jan 16 21:18:34.874307 kubelet[3565]: I0116 21:18:34.874273 3565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:18:34.875898 kubelet[3565]: I0116 21:18:34.875298 3565 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 21:18:34.875898 kubelet[3565]: E0116 21:18:34.875593 3565 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580.0.0-p-452f1e7704\" not found" Jan 16 21:18:34.876002 kubelet[3565]: E0116 21:18:34.875826 3565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-452f1e7704?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="200ms" Jan 16 21:18:34.876099 kubelet[3565]: I0116 21:18:34.876092 3565 reconciler.go:26] "Reconciler: start to sync state" Jan 16 21:18:34.876166 kubelet[3565]: I0116 21:18:34.876161 3565 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 21:18:34.876488 kubelet[3565]: W0116 21:18:34.876454 3565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 16 21:18:34.876571 kubelet[3565]: E0116 21:18:34.876561 3565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:18:34.876937 kubelet[3565]: I0116 21:18:34.876926 3565 factory.go:221] Registration of the systemd container factory successfully Jan 16 21:18:34.877184 kubelet[3565]: I0116 21:18:34.877169 3565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:18:34.876000 audit[3579]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.876000 audit[3579]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe665b0870 a2=0 a3=0 items=0 ppid=3565 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:18:34.878548 kubelet[3565]: I0116 21:18:34.878538 3565 factory.go:221] Registration of the containerd container factory successfully Jan 16 21:18:34.878000 audit[3581]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.878000 audit[3581]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=340 a0=3 a1=7ffea488ab00 a2=0 a3=0 items=0 ppid=3565 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:18:34.881658 kubelet[3565]: E0116 21:18:34.881636 3565 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:18:34.889000 audit[3586]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.889000 audit[3586]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffec7866260 a2=0 a3=0 items=0 ppid=3565 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 16 21:18:34.891221 kubelet[3565]: I0116 21:18:34.891192 3565 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 16 21:18:34.891000 audit[3587]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:34.891000 audit[3587]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde8b03d20 a2=0 a3=0 items=0 ppid=3565 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 21:18:34.892376 kubelet[3565]: I0116 21:18:34.892365 3565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 21:18:34.892426 kubelet[3565]: I0116 21:18:34.892421 3565 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 21:18:34.892465 kubelet[3565]: I0116 21:18:34.892461 3565 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 21:18:34.892495 kubelet[3565]: I0116 21:18:34.892491 3565 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 21:18:34.892553 kubelet[3565]: E0116 21:18:34.892543 3565 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 21:18:34.892000 audit[3588]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.892000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef3dba290 a2=0 a3=0 items=0 ppid=3565 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 21:18:34.893000 audit[3589]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:34.893000 audit[3589]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd34a23940 a2=0 a3=0 items=0 ppid=3565 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 21:18:34.894000 audit[3591]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=3591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:34.894000 audit[3591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4ef63600 a2=0 a3=0 items=0 ppid=3565 
pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 21:18:34.894000 audit[3590]: NETFILTER_CFG table=nat:54 family=2 entries=1 op=nft_register_chain pid=3590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.894000 audit[3590]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6c50aa50 a2=0 a3=0 items=0 ppid=3565 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 21:18:34.894000 audit[3592]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:34.894000 audit[3592]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7c12d4b0 a2=0 a3=0 items=0 ppid=3565 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 21:18:34.895000 audit[3593]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:34.895000 audit[3593]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3b86fe70 a2=0 a3=0 items=0 
ppid=3565 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:34.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 21:18:34.897217 kubelet[3565]: I0116 21:18:34.897066 3565 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 21:18:34.897217 kubelet[3565]: I0116 21:18:34.897075 3565 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 21:18:34.897217 kubelet[3565]: I0116 21:18:34.897087 3565 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:34.897492 kubelet[3565]: W0116 21:18:34.897465 3565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 16 21:18:34.897527 kubelet[3565]: E0116 21:18:34.897500 3565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:18:34.903530 kubelet[3565]: I0116 21:18:34.903516 3565 policy_none.go:49] "None policy: Start" Jan 16 21:18:34.903530 kubelet[3565]: I0116 21:18:34.903530 3565 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 21:18:34.903605 kubelet[3565]: I0116 21:18:34.903539 3565 state_mem.go:35] "Initializing new in-memory state store" Jan 16 21:18:34.909513 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 16 21:18:34.921410 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 21:18:34.924646 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 21:18:34.934290 kubelet[3565]: I0116 21:18:34.934274 3565 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 21:18:34.934397 kubelet[3565]: I0116 21:18:34.934388 3565 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 21:18:34.934428 kubelet[3565]: I0116 21:18:34.934400 3565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 21:18:34.935099 kubelet[3565]: I0116 21:18:34.935090 3565 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 21:18:34.936065 kubelet[3565]: E0116 21:18:34.936048 3565 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 21:18:34.936386 kubelet[3565]: E0116 21:18:34.936253 3565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4580.0.0-p-452f1e7704\" not found" Jan 16 21:18:35.000535 systemd[1]: Created slice kubepods-burstable-pod57e1a4e88a8241a139db9cea8c7eae27.slice - libcontainer container kubepods-burstable-pod57e1a4e88a8241a139db9cea8c7eae27.slice. Jan 16 21:18:35.009932 kubelet[3565]: E0116 21:18:35.009889 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.010990 systemd[1]: Created slice kubepods-burstable-pod383c9f717329f82284a04d0ab27a3cd9.slice - libcontainer container kubepods-burstable-pod383c9f717329f82284a04d0ab27a3cd9.slice. 
Jan 16 21:18:35.014646 kubelet[3565]: E0116 21:18:35.014624 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.016993 systemd[1]: Created slice kubepods-burstable-podde42c8547074b0d2ede804fcc03b8f4d.slice - libcontainer container kubepods-burstable-podde42c8547074b0d2ede804fcc03b8f4d.slice. Jan 16 21:18:35.018409 kubelet[3565]: E0116 21:18:35.018387 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.036309 kubelet[3565]: I0116 21:18:35.036293 3565 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.036504 kubelet[3565]: E0116 21:18:35.036485 3565 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077123 kubelet[3565]: I0116 21:18:35.076964 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-kubeconfig\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077123 kubelet[3565]: I0116 21:18:35.076990 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de42c8547074b0d2ede804fcc03b8f4d-kubeconfig\") pod \"kube-scheduler-ci-4580.0.0-p-452f1e7704\" (UID: \"de42c8547074b0d2ede804fcc03b8f4d\") " pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077123 kubelet[3565]: 
I0116 21:18:35.077007 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-k8s-certs\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: \"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077123 kubelet[3565]: I0116 21:18:35.077022 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: \"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077123 kubelet[3565]: E0116 21:18:35.077028 3565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-452f1e7704?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="400ms" Jan 16 21:18:35.077267 kubelet[3565]: I0116 21:18:35.077037 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-ca-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077267 kubelet[3565]: I0116 21:18:35.077050 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " 
pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077267 kubelet[3565]: I0116 21:18:35.077064 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-k8s-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077267 kubelet[3565]: I0116 21:18:35.077077 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.077267 kubelet[3565]: I0116 21:18:35.077090 3565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-ca-certs\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: \"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.238112 kubelet[3565]: I0116 21:18:35.238087 3565 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.238299 kubelet[3565]: E0116 21:18:35.238279 3565 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.311343 containerd[2539]: time="2026-01-16T21:18:35.311316661Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4580.0.0-p-452f1e7704,Uid:57e1a4e88a8241a139db9cea8c7eae27,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:35.315998 containerd[2539]: time="2026-01-16T21:18:35.315975570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580.0.0-p-452f1e7704,Uid:383c9f717329f82284a04d0ab27a3cd9,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:35.320675 containerd[2539]: time="2026-01-16T21:18:35.320652697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580.0.0-p-452f1e7704,Uid:de42c8547074b0d2ede804fcc03b8f4d,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:35.398793 containerd[2539]: time="2026-01-16T21:18:35.398768499Z" level=info msg="connecting to shim a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d" address="unix:///run/containerd/s/d8b225e5c42386c267cd2b68fc56e27ae33b56c8e7a57dd7e5410e1e2d82354c" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:35.417609 containerd[2539]: time="2026-01-16T21:18:35.417245013Z" level=info msg="connecting to shim f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614" address="unix:///run/containerd/s/ee143ad97136d48d2e5744115a0da059d5a0e1071c3f2882a0d3213843409c09" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:35.426003 containerd[2539]: time="2026-01-16T21:18:35.425980501Z" level=info msg="connecting to shim f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3" address="unix:///run/containerd/s/4d8a095129e56eede3918227f973954250c654c5145b58bb79100f79ab509a6a" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:35.441131 systemd[1]: Started cri-containerd-a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d.scope - libcontainer container a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d. 
Jan 16 21:18:35.444328 systemd[1]: Started cri-containerd-f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614.scope - libcontainer container f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614. Jan 16 21:18:35.467955 systemd[1]: Started cri-containerd-f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3.scope - libcontainer container f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3. Jan 16 21:18:35.468000 audit: BPF prog-id=106 op=LOAD Jan 16 21:18:35.469000 audit: BPF prog-id=107 op=LOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=107 op=UNLOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=108 op=LOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=109 op=LOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=109 op=UNLOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=108 op=UNLOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.469000 audit: BPF prog-id=110 op=LOAD Jan 16 21:18:35.469000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3604 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343035393061376230386562333665373963323737636366323739 Jan 16 21:18:35.470000 audit: BPF prog-id=111 op=LOAD Jan 16 21:18:35.471000 audit: BPF prog-id=112 op=LOAD Jan 16 21:18:35.471000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 
16 21:18:35.473000 audit: BPF prog-id=112 op=UNLOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.473000 audit: BPF prog-id=113 op=LOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.473000 audit: BPF prog-id=114 op=LOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.473000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.473000 audit: BPF prog-id=114 op=UNLOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.473000 audit: BPF prog-id=113 op=UNLOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.473000 audit: BPF prog-id=115 op=LOAD Jan 16 21:18:35.473000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3625 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:18:35.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630636139363036326639393039353135646566396661643636393864 Jan 16 21:18:35.478109 kubelet[3565]: E0116 21:18:35.478085 3565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580.0.0-p-452f1e7704?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="800ms" Jan 16 21:18:35.482000 audit: BPF prog-id=116 op=LOAD Jan 16 21:18:35.482000 audit: BPF prog-id=117 op=LOAD Jan 16 21:18:35.482000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.482000 audit: BPF prog-id=117 op=UNLOAD Jan 16 21:18:35.482000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.482000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.484000 audit: BPF prog-id=118 op=LOAD Jan 16 21:18:35.484000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.484000 audit: BPF prog-id=119 op=LOAD Jan 16 21:18:35.484000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.484000 audit: BPF prog-id=119 op=UNLOAD Jan 16 21:18:35.484000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:18:35.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.484000 audit: BPF prog-id=118 op=UNLOAD Jan 16 21:18:35.484000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.484000 audit: BPF prog-id=120 op=LOAD Jan 16 21:18:35.484000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3649 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635663261353335373032376166333936666335663466663963356266 Jan 16 21:18:35.502875 containerd[2539]: time="2026-01-16T21:18:35.502813345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580.0.0-p-452f1e7704,Uid:57e1a4e88a8241a139db9cea8c7eae27,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d\"" Jan 16 21:18:35.513159 containerd[2539]: time="2026-01-16T21:18:35.512755097Z" level=info msg="CreateContainer within sandbox \"a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 21:18:35.531610 containerd[2539]: time="2026-01-16T21:18:35.531590192Z" level=info msg="Container 039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:35.534232 containerd[2539]: time="2026-01-16T21:18:35.534213151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580.0.0-p-452f1e7704,Uid:383c9f717329f82284a04d0ab27a3cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3\"" Jan 16 21:18:35.535891 containerd[2539]: time="2026-01-16T21:18:35.535876143Z" level=info msg="CreateContainer within sandbox \"f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 21:18:35.537327 containerd[2539]: time="2026-01-16T21:18:35.537307665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580.0.0-p-452f1e7704,Uid:de42c8547074b0d2ede804fcc03b8f4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614\"" Jan 16 21:18:35.538951 containerd[2539]: time="2026-01-16T21:18:35.538763370Z" level=info msg="CreateContainer within sandbox \"f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 21:18:35.554710 containerd[2539]: time="2026-01-16T21:18:35.554689934Z" level=info msg="CreateContainer within sandbox \"a640590a7b08eb36e79c277ccf279d7811d258bc9030b29c10af2623ffafda5d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e\"" Jan 16 21:18:35.555141 containerd[2539]: time="2026-01-16T21:18:35.555121224Z" level=info msg="StartContainer for \"039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e\"" Jan 16 21:18:35.556079 containerd[2539]: time="2026-01-16T21:18:35.556028156Z" level=info msg="connecting to shim 039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e" address="unix:///run/containerd/s/d8b225e5c42386c267cd2b68fc56e27ae33b56c8e7a57dd7e5410e1e2d82354c" protocol=ttrpc version=3 Jan 16 21:18:35.566967 systemd[1]: Started cri-containerd-039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e.scope - libcontainer container 039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e. Jan 16 21:18:35.573000 audit: BPF prog-id=121 op=LOAD Jan 16 21:18:35.573000 audit: BPF prog-id=122 op=LOAD Jan 16 21:18:35.573000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.573000 audit: BPF prog-id=122 op=UNLOAD Jan 16 21:18:35.573000 audit[3739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.573000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.573000 audit: BPF prog-id=123 op=LOAD Jan 16 21:18:35.573000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.573000 audit: BPF prog-id=124 op=LOAD Jan 16 21:18:35.573000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.573000 audit: BPF prog-id=124 op=UNLOAD Jan 16 21:18:35.573000 audit[3739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:18:35.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.574000 audit: BPF prog-id=123 op=UNLOAD Jan 16 21:18:35.574000 audit[3739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.574000 audit: BPF prog-id=125 op=LOAD Jan 16 21:18:35.574000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3604 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033396265323039343534303930373236636332633835313063613364 Jan 16 21:18:35.638373 containerd[2539]: time="2026-01-16T21:18:35.638352703Z" level=info msg="StartContainer for \"039be209454090726cc2c8510ca3d4b80916ca996cdb929456b8ed53e9c12d4e\" returns successfully" Jan 16 21:18:35.639496 kubelet[3565]: I0116 21:18:35.639483 3565 kubelet_node_status.go:75] "Attempting to register 
node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.643498 containerd[2539]: time="2026-01-16T21:18:35.643477330Z" level=info msg="Container baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:35.658852 containerd[2539]: time="2026-01-16T21:18:35.658677775Z" level=info msg="Container f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:35.661212 containerd[2539]: time="2026-01-16T21:18:35.661178232Z" level=info msg="CreateContainer within sandbox \"f5f2a5357027af396fc5f4ff9c5bfb5639ea6d1bc5df76083fa459c5425b7da3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099\"" Jan 16 21:18:35.661657 containerd[2539]: time="2026-01-16T21:18:35.661642194Z" level=info msg="StartContainer for \"baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099\"" Jan 16 21:18:35.663321 containerd[2539]: time="2026-01-16T21:18:35.663300433Z" level=info msg="connecting to shim baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099" address="unix:///run/containerd/s/4d8a095129e56eede3918227f973954250c654c5145b58bb79100f79ab509a6a" protocol=ttrpc version=3 Jan 16 21:18:35.672399 containerd[2539]: time="2026-01-16T21:18:35.672379591Z" level=info msg="CreateContainer within sandbox \"f0ca96062f9909515def9fad6698d65d2ca62e8a2afdea7744930e15a6edc614\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1\"" Jan 16 21:18:35.674001 containerd[2539]: time="2026-01-16T21:18:35.673486550Z" level=info msg="StartContainer for \"f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1\"" Jan 16 21:18:35.674488 containerd[2539]: time="2026-01-16T21:18:35.674461441Z" level=info msg="connecting to shim 
f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1" address="unix:///run/containerd/s/ee143ad97136d48d2e5744115a0da059d5a0e1071c3f2882a0d3213843409c09" protocol=ttrpc version=3 Jan 16 21:18:35.683061 systemd[1]: Started cri-containerd-baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099.scope - libcontainer container baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099. Jan 16 21:18:35.698996 systemd[1]: Started cri-containerd-f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1.scope - libcontainer container f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1. Jan 16 21:18:35.701000 audit: BPF prog-id=126 op=LOAD Jan 16 21:18:35.702000 audit: BPF prog-id=127 op=LOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=127 op=UNLOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=128 op=LOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=129 op=LOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=129 op=UNLOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=128 op=UNLOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.702000 audit: BPF prog-id=130 op=LOAD Jan 16 21:18:35.702000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3649 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261616538643834346266353364623739633764356163363337323630 Jan 16 21:18:35.716000 audit: BPF prog-id=131 op=LOAD Jan 16 21:18:35.716000 audit: BPF prog-id=132 op=LOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=132 op=UNLOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=133 op=LOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=134 op=LOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=134 op=UNLOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=133 op=UNLOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.716000 audit: BPF prog-id=135 op=LOAD Jan 16 21:18:35.716000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 
ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638663764623166653061343861326365346563333435616563363262 Jan 16 21:18:35.759236 containerd[2539]: time="2026-01-16T21:18:35.759178273Z" level=info msg="StartContainer for \"baae8d844bf53db79c7d5ac637260f24b4f4f89cc0e06ffa27daef719df68099\" returns successfully" Jan 16 21:18:35.794519 containerd[2539]: time="2026-01-16T21:18:35.794496767Z" level=info msg="StartContainer for \"f8f7db1fe0a48a2ce4ec345aec62bed59b997e62a4570bccf42a468c1bc580d1\" returns successfully" Jan 16 21:18:35.901511 kubelet[3565]: E0116 21:18:35.901491 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.904870 kubelet[3565]: E0116 21:18:35.904830 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:35.907512 kubelet[3565]: E0116 21:18:35.907494 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:36.911360 kubelet[3565]: E0116 21:18:36.911052 3565 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:36.911360 kubelet[3565]: E0116 21:18:36.911284 3565 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.124408 kubelet[3565]: E0116 21:18:37.124384 3565 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4580.0.0-p-452f1e7704\" not found" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.250364 update_engine[2506]: I20260116 21:18:37.249867 2506 update_attempter.cc:509] Updating boot flags... Jan 16 21:18:37.256350 kubelet[3565]: I0116 21:18:37.256332 3565 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.256634 kubelet[3565]: E0116 21:18:37.256624 3565 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4580.0.0-p-452f1e7704\": node \"ci-4580.0.0-p-452f1e7704\" not found" Jan 16 21:18:37.276669 kubelet[3565]: I0116 21:18:37.276652 3565 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.286996 kubelet[3565]: E0116 21:18:37.286980 3565 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-452f1e7704\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.287149 kubelet[3565]: I0116 21:18:37.287082 3565 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.289425 kubelet[3565]: E0116 21:18:37.289408 3565 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.289577 kubelet[3565]: I0116 21:18:37.289487 3565 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.291803 kubelet[3565]: E0116 21:18:37.291767 3565 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.860677 kubelet[3565]: I0116 21:18:37.860658 3565 apiserver.go:52] "Watching apiserver" Jan 16 21:18:37.877713 kubelet[3565]: I0116 21:18:37.877560 3565 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 21:18:37.909289 kubelet[3565]: I0116 21:18:37.909279 3565 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.912065 kubelet[3565]: E0116 21:18:37.912050 3565 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:37.950874 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 16 21:18:38.447673 kubelet[3565]: I0116 21:18:38.447656 3565 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:38.453827 kubelet[3565]: W0116 21:18:38.453776 3565 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:18:39.387807 systemd[1]: Reload requested from client PID 3877 ('systemctl') (unit session-10.scope)... Jan 16 21:18:39.387820 systemd[1]: Reloading... Jan 16 21:18:39.466873 zram_generator::config[3927]: No configuration found. Jan 16 21:18:39.634777 systemd[1]: Reloading finished in 246 ms. 
Jan 16 21:18:39.654181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:39.664503 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 21:18:39.664741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:39.666987 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 16 21:18:39.667024 kernel: audit: type=1131 audit(1768598319.663:408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:39.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:39.664789 systemd[1]: kubelet.service: Consumed 473ms CPU time, 131.9M memory peak. Jan 16 21:18:39.670049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 21:18:39.670000 audit: BPF prog-id=136 op=LOAD Jan 16 21:18:39.673854 kernel: audit: type=1334 audit(1768598319.670:409): prog-id=136 op=LOAD Jan 16 21:18:39.670000 audit: BPF prog-id=105 op=UNLOAD Jan 16 21:18:39.671000 audit: BPF prog-id=137 op=LOAD Jan 16 21:18:39.676174 kernel: audit: type=1334 audit(1768598319.670:410): prog-id=105 op=UNLOAD Jan 16 21:18:39.676217 kernel: audit: type=1334 audit(1768598319.671:411): prog-id=137 op=LOAD Jan 16 21:18:39.671000 audit: BPF prog-id=138 op=LOAD Jan 16 21:18:39.677336 kernel: audit: type=1334 audit(1768598319.671:412): prog-id=138 op=LOAD Jan 16 21:18:39.671000 audit: BPF prog-id=103 op=UNLOAD Jan 16 21:18:39.678522 kernel: audit: type=1334 audit(1768598319.671:413): prog-id=103 op=UNLOAD Jan 16 21:18:39.671000 audit: BPF prog-id=104 op=UNLOAD Jan 16 21:18:39.682930 kernel: audit: type=1334 audit(1768598319.671:414): prog-id=104 op=UNLOAD Jan 16 21:18:39.682981 kernel: audit: type=1334 audit(1768598319.672:415): prog-id=139 op=LOAD Jan 16 21:18:39.672000 audit: BPF prog-id=139 op=LOAD Jan 16 21:18:39.684598 kernel: audit: type=1334 audit(1768598319.672:416): prog-id=94 op=UNLOAD Jan 16 21:18:39.672000 audit: BPF prog-id=94 op=UNLOAD Jan 16 21:18:39.686225 kernel: audit: type=1334 audit(1768598319.672:417): prog-id=140 op=LOAD Jan 16 21:18:39.672000 audit: BPF prog-id=140 op=LOAD Jan 16 21:18:39.672000 audit: BPF prog-id=141 op=LOAD Jan 16 21:18:39.672000 audit: BPF prog-id=95 op=UNLOAD Jan 16 21:18:39.672000 audit: BPF prog-id=96 op=UNLOAD Jan 16 21:18:39.673000 audit: BPF prog-id=142 op=LOAD Jan 16 21:18:39.673000 audit: BPF prog-id=92 op=UNLOAD Jan 16 21:18:39.679000 audit: BPF prog-id=143 op=LOAD Jan 16 21:18:39.679000 audit: BPF prog-id=100 op=UNLOAD Jan 16 21:18:39.679000 audit: BPF prog-id=144 op=LOAD Jan 16 21:18:39.679000 audit: BPF prog-id=145 op=LOAD Jan 16 21:18:39.679000 audit: BPF prog-id=101 op=UNLOAD Jan 16 21:18:39.679000 audit: BPF prog-id=102 op=UNLOAD Jan 16 21:18:39.680000 audit: BPF 
prog-id=146 op=LOAD Jan 16 21:18:39.680000 audit: BPF prog-id=93 op=UNLOAD Jan 16 21:18:39.680000 audit: BPF prog-id=147 op=LOAD Jan 16 21:18:39.680000 audit: BPF prog-id=89 op=UNLOAD Jan 16 21:18:39.680000 audit: BPF prog-id=148 op=LOAD Jan 16 21:18:39.680000 audit: BPF prog-id=149 op=LOAD Jan 16 21:18:39.680000 audit: BPF prog-id=90 op=UNLOAD Jan 16 21:18:39.680000 audit: BPF prog-id=91 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=150 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=97 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=151 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=152 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=98 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=99 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=153 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=86 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=154 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=155 op=LOAD Jan 16 21:18:39.682000 audit: BPF prog-id=87 op=UNLOAD Jan 16 21:18:39.682000 audit: BPF prog-id=88 op=UNLOAD Jan 16 21:18:46.138771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:46.143259 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 16 21:18:46.143316 kernel: audit: type=1130 audit(1768598326.138:449): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:46.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:46.149078 (kubelet)[3994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:18:46.186859 kubelet[3994]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:18:46.186859 kubelet[3994]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:18:46.187070 kubelet[3994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:18:46.187070 kubelet[3994]: I0116 21:18:46.186950 3994 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:18:46.194947 kubelet[3994]: I0116 21:18:46.194926 3994 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 21:18:46.194947 kubelet[3994]: I0116 21:18:46.194944 3994 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:18:46.195138 kubelet[3994]: I0116 21:18:46.195126 3994 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 21:18:46.195846 kubelet[3994]: I0116 21:18:46.195823 3994 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 16 21:18:46.197339 kubelet[3994]: I0116 21:18:46.197233 3994 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:18:46.200203 kubelet[3994]: I0116 21:18:46.200193 3994 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:18:46.202059 kubelet[3994]: I0116 21:18:46.202042 3994 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 21:18:46.202212 kubelet[3994]: I0116 21:18:46.202191 3994 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:18:46.202332 kubelet[3994]: I0116 21:18:46.202212 3994 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580.0.0-p-452f1e7704","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:18:46.202419 kubelet[3994]: I0116 21:18:46.202340 3994 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 21:18:46.202419 kubelet[3994]: I0116 21:18:46.202349 3994 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 21:18:46.202419 kubelet[3994]: I0116 21:18:46.202391 3994 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:46.202498 kubelet[3994]: I0116 21:18:46.202489 3994 kubelet.go:446] "Attempting to sync node with API server" Jan 16 21:18:46.202519 kubelet[3994]: I0116 21:18:46.202506 3994 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:18:46.203858 kubelet[3994]: I0116 21:18:46.202870 3994 kubelet.go:352] "Adding apiserver pod source" Jan 16 21:18:46.203858 kubelet[3994]: I0116 21:18:46.202888 3994 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:18:46.206872 kubelet[3994]: I0116 21:18:46.206860 3994 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:18:46.207379 kubelet[3994]: I0116 21:18:46.207372 3994 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 21:18:46.208782 kubelet[3994]: I0116 21:18:46.208770 3994 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 21:18:46.208894 kubelet[3994]: I0116 21:18:46.208888 3994 server.go:1287] "Started kubelet" Jan 16 21:18:46.214343 kubelet[3994]: I0116 21:18:46.214095 3994 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:18:46.218386 kubelet[3994]: I0116 
21:18:46.217888 3994 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 21:18:46.230894 kubelet[3994]: I0116 21:18:46.229397 3994 server.go:479] "Adding debug handlers to kubelet server" Jan 16 21:18:46.234856 kubelet[3994]: I0116 21:18:46.229575 3994 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:18:46.237889 kubelet[3994]: I0116 21:18:46.236341 3994 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:18:46.237889 kubelet[3994]: I0116 21:18:46.231301 3994 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 21:18:46.237889 kubelet[3994]: I0116 21:18:46.229707 3994 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:18:46.237889 kubelet[3994]: I0116 21:18:46.231312 3994 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 21:18:46.237889 kubelet[3994]: I0116 21:18:46.236534 3994 reconciler.go:26] "Reconciler: start to sync state" Jan 16 21:18:46.237889 kubelet[3994]: E0116 21:18:46.231448 3994 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580.0.0-p-452f1e7704\" not found" Jan 16 21:18:46.240338 kubelet[3994]: I0116 21:18:46.240318 3994 factory.go:221] Registration of the systemd container factory successfully Jan 16 21:18:46.240399 kubelet[3994]: I0116 21:18:46.240389 3994 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:18:46.243850 kubelet[3994]: I0116 21:18:46.242410 3994 factory.go:221] Registration of the containerd container factory successfully Jan 16 21:18:46.243850 kubelet[3994]: E0116 21:18:46.243150 3994 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:18:46.248490 kubelet[3994]: I0116 21:18:46.248474 3994 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 21:18:46.251236 kubelet[3994]: I0116 21:18:46.251220 3994 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 21:18:46.251334 kubelet[3994]: I0116 21:18:46.251328 3994 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 21:18:46.251396 kubelet[3994]: I0116 21:18:46.251381 3994 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 21:18:46.251432 kubelet[3994]: I0116 21:18:46.251428 3994 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 21:18:46.251502 kubelet[3994]: E0116 21:18:46.251492 3994 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 21:18:46.294918 kubelet[3994]: I0116 21:18:46.294907 3994 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 21:18:46.295041 kubelet[3994]: I0116 21:18:46.295034 3994 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 21:18:46.295099 kubelet[3994]: I0116 21:18:46.295095 3994 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:46.295257 kubelet[3994]: I0116 21:18:46.295250 3994 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 21:18:46.295319 kubelet[3994]: I0116 21:18:46.295298 3994 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 21:18:46.295349 kubelet[3994]: I0116 21:18:46.295346 3994 policy_none.go:49] "None policy: Start" Jan 16 21:18:46.295379 kubelet[3994]: I0116 21:18:46.295375 3994 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 21:18:46.295419 kubelet[3994]: I0116 21:18:46.295416 3994 state_mem.go:35] "Initializing new in-memory state 
store" Jan 16 21:18:46.295568 kubelet[3994]: I0116 21:18:46.295562 3994 state_mem.go:75] "Updated machine memory state" Jan 16 21:18:46.299739 kubelet[3994]: I0116 21:18:46.299707 3994 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 21:18:46.352251 kubelet[3994]: E0116 21:18:46.352229 3994 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 21:18:46.522673 kubelet[3994]: I0116 21:18:46.521989 3994 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 21:18:46.522673 kubelet[3994]: I0116 21:18:46.522004 3994 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 21:18:46.522673 kubelet[3994]: I0116 21:18:46.522221 3994 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 21:18:46.524173 kubelet[3994]: I0116 21:18:46.524158 3994 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 21:18:46.524863 containerd[2539]: time="2026-01-16T21:18:46.524379072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 21:18:46.525076 kubelet[3994]: I0116 21:18:46.524985 3994 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 21:18:46.525811 kubelet[3994]: E0116 21:18:46.525795 3994 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 21:18:46.552767 kubelet[3994]: I0116 21:18:46.552747 3994 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.554221 kubelet[3994]: I0116 21:18:46.553144 3994 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.554380 kubelet[3994]: I0116 21:18:46.553427 3994 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.559403 kubelet[3994]: W0116 21:18:46.559377 3994 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:18:46.563884 kubelet[3994]: W0116 21:18:46.563754 3994 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:18:46.565304 kubelet[3994]: W0116 21:18:46.565106 3994 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 21:18:46.565304 kubelet[3994]: E0116 21:18:46.565161 3994 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-452f1e7704\" already exists" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.630650 kubelet[3994]: I0116 21:18:46.630262 3994 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638492 kubelet[3994]: I0116 21:18:46.638451 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-k8s-certs\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: 
\"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638492 kubelet[3994]: I0116 21:18:46.638478 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: \"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638578 kubelet[3994]: I0116 21:18:46.638495 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638578 kubelet[3994]: I0116 21:18:46.638510 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-k8s-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638578 kubelet[3994]: I0116 21:18:46.638524 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638578 kubelet[3994]: I0116 21:18:46.638538 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de42c8547074b0d2ede804fcc03b8f4d-kubeconfig\") pod \"kube-scheduler-ci-4580.0.0-p-452f1e7704\" (UID: \"de42c8547074b0d2ede804fcc03b8f4d\") " pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638578 kubelet[3994]: I0116 21:18:46.638551 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57e1a4e88a8241a139db9cea8c7eae27-ca-certs\") pod \"kube-apiserver-ci-4580.0.0-p-452f1e7704\" (UID: \"57e1a4e88a8241a139db9cea8c7eae27\") " pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638697 kubelet[3994]: I0116 21:18:46.638564 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-ca-certs\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.638697 kubelet[3994]: I0116 21:18:46.638578 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/383c9f717329f82284a04d0ab27a3cd9-kubeconfig\") pod \"kube-controller-manager-ci-4580.0.0-p-452f1e7704\" (UID: \"383c9f717329f82284a04d0ab27a3cd9\") " pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.642086 kubelet[3994]: I0116 21:18:46.642061 3994 kubelet_node_status.go:124] "Node was previously registered" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:46.642155 kubelet[3994]: I0116 21:18:46.642115 3994 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580.0.0-p-452f1e7704" Jan 16 21:18:47.203217 kubelet[3994]: I0116 21:18:47.203201 3994 apiserver.go:52] "Watching 
apiserver" Jan 16 21:18:47.210829 systemd[1]: Created slice kubepods-besteffort-podfda050a9_736f_494d_9ba8_3d6bde4f73a0.slice - libcontainer container kubepods-besteffort-podfda050a9_736f_494d_9ba8_3d6bde4f73a0.slice. Jan 16 21:18:47.226747 kubelet[3994]: I0116 21:18:47.226483 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4580.0.0-p-452f1e7704" podStartSLOduration=1.226470589 podStartE2EDuration="1.226470589s" podCreationTimestamp="2026-01-16 21:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:47.218996715 +0000 UTC m=+1.066483632" watchObservedRunningTime="2026-01-16 21:18:47.226470589 +0000 UTC m=+1.073957506" Jan 16 21:18:47.236226 kubelet[3994]: I0116 21:18:47.235586 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" podStartSLOduration=9.235574591 podStartE2EDuration="9.235574591s" podCreationTimestamp="2026-01-16 21:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:47.227074544 +0000 UTC m=+1.074561460" watchObservedRunningTime="2026-01-16 21:18:47.235574591 +0000 UTC m=+1.083061498" Jan 16 21:18:47.236844 kubelet[3994]: I0116 21:18:47.236822 3994 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 21:18:47.242246 kubelet[3994]: I0116 21:18:47.242213 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda050a9-736f-494d-9ba8-3d6bde4f73a0-xtables-lock\") pod \"kube-proxy-6lnsm\" (UID: \"fda050a9-736f-494d-9ba8-3d6bde4f73a0\") " pod="kube-system/kube-proxy-6lnsm" Jan 16 21:18:47.242324 kubelet[3994]: I0116 21:18:47.242254 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fda050a9-736f-494d-9ba8-3d6bde4f73a0-kube-proxy\") pod \"kube-proxy-6lnsm\" (UID: \"fda050a9-736f-494d-9ba8-3d6bde4f73a0\") " pod="kube-system/kube-proxy-6lnsm" Jan 16 21:18:47.242324 kubelet[3994]: I0116 21:18:47.242270 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda050a9-736f-494d-9ba8-3d6bde4f73a0-lib-modules\") pod \"kube-proxy-6lnsm\" (UID: \"fda050a9-736f-494d-9ba8-3d6bde4f73a0\") " pod="kube-system/kube-proxy-6lnsm" Jan 16 21:18:47.242324 kubelet[3994]: I0116 21:18:47.242286 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqqw2\" (UniqueName: \"kubernetes.io/projected/fda050a9-736f-494d-9ba8-3d6bde4f73a0-kube-api-access-cqqw2\") pod \"kube-proxy-6lnsm\" (UID: \"fda050a9-736f-494d-9ba8-3d6bde4f73a0\") " pod="kube-system/kube-proxy-6lnsm" Jan 16 21:18:47.245774 kubelet[3994]: I0116 21:18:47.245732 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4580.0.0-p-452f1e7704" podStartSLOduration=1.245721547 podStartE2EDuration="1.245721547s" podCreationTimestamp="2026-01-16 21:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:47.237278012 +0000 UTC m=+1.084764930" watchObservedRunningTime="2026-01-16 21:18:47.245721547 +0000 UTC m=+1.093208467" Jan 16 21:18:47.269309 kubelet[3994]: I0116 21:18:47.269223 3994 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:47.277061 kubelet[3994]: W0116 21:18:47.277021 3994 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Jan 16 21:18:47.277124 kubelet[3994]: E0116 21:18:47.277070 3994 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580.0.0-p-452f1e7704\" already exists" pod="kube-system/kube-scheduler-ci-4580.0.0-p-452f1e7704" Jan 16 21:18:47.438107 systemd[1]: Created slice kubepods-besteffort-pod6051bd58_3025_4c24_995c_805be42b574f.slice - libcontainer container kubepods-besteffort-pod6051bd58_3025_4c24_995c_805be42b574f.slice. Jan 16 21:18:47.444079 kubelet[3994]: I0116 21:18:47.443973 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khnvq\" (UniqueName: \"kubernetes.io/projected/6051bd58-3025-4c24-995c-805be42b574f-kube-api-access-khnvq\") pod \"tigera-operator-7dcd859c48-9wdmt\" (UID: \"6051bd58-3025-4c24-995c-805be42b574f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9wdmt" Jan 16 21:18:47.444079 kubelet[3994]: I0116 21:18:47.444002 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6051bd58-3025-4c24-995c-805be42b574f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9wdmt\" (UID: \"6051bd58-3025-4c24-995c-805be42b574f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9wdmt" Jan 16 21:18:47.521527 containerd[2539]: time="2026-01-16T21:18:47.521501892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6lnsm,Uid:fda050a9-736f-494d-9ba8-3d6bde4f73a0,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:47.743302 containerd[2539]: time="2026-01-16T21:18:47.743232272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9wdmt,Uid:6051bd58-3025-4c24-995c-805be42b574f,Namespace:tigera-operator,Attempt:0,}" Jan 16 21:18:49.576742 waagent[2720]: 2026-01-16T21:18:49.576703Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 16 
21:18:49.582467 waagent[2720]: 2026-01-16T21:18:49.582436Z INFO ExtHandler Jan 16 21:18:49.582528 waagent[2720]: 2026-01-16T21:18:49.582515Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 63583c27-c1d6-482b-9327-09bacb1d798c eTag: 3032502043203865424 source: Fabric] Jan 16 21:18:49.582758 waagent[2720]: 2026-01-16T21:18:49.582734Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 16 21:18:49.730185 waagent[2720]: 2026-01-16T21:18:49.730126Z INFO ExtHandler Jan 16 21:18:49.730399 waagent[2720]: 2026-01-16T21:18:49.730364Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 16 21:18:49.764548 waagent[2720]: 2026-01-16T21:18:49.764524Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 16 21:18:50.375269 waagent[2720]: 2026-01-16T21:18:50.375223Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ABD78B6C25ABECD75ABE4C30DCBD55E5BF92E778', 'hasPrivateKey': True} Jan 16 21:18:50.375657 waagent[2720]: 2026-01-16T21:18:50.375631Z INFO ExtHandler Fetch goal state completed Jan 16 21:18:50.375954 waagent[2720]: 2026-01-16T21:18:50.375928Z INFO ExtHandler ExtHandler Jan 16 21:18:50.376002 waagent[2720]: 2026-01-16T21:18:50.375984Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 049619eb-f73e-42e3-8bc9-e73b47f59410 correlation 35c687fe-adba-4b0b-a995-e5eb2faed624 created: 2026-01-16T21:18:40.727798Z] Jan 16 21:18:50.376201 waagent[2720]: 2026-01-16T21:18:50.376179Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
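The kubelet entries above use klog's glog-style header: a severity letter (`I`/`W`/`E`/`F`), the date as `MMDD`, a timestamp, the PID, and the `file:line` of the logging call, followed by the structured message. A minimal parsing sketch, assuming that layout (the field names here are illustrative, not part of any official schema):

```python
import re

# glog/klog header, as seen in the kubelet lines above:
#   I0116 21:18:46.202340    3994 topology_manager.go:138] "message..."
# Field names below are our own labels for illustration.
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])(?P<date>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +"
    r"(?P<pid>\d+) "
    r"(?P<file>[\w.]+):(?P<line>\d+)\] "
    r"(?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split one klog-formatted line into its header fields and message."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

# A line copied from the kubelet output above:
sample = ('I0116 21:18:46.202340    3994 topology_manager.go:138] '
          '"Creating topology manager with none policy"')
rec = parse_klog(sample)
print(rec["severity"], rec["pid"], rec["file"], rec["line"])
```

This only handles the single-line form shown in this log; klog can also emit multi-line values, which a real parser would need to join first.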
Jan 16 21:18:50.376542 waagent[2720]: 2026-01-16T21:18:50.376518Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 16 21:18:50.843998 containerd[2539]: time="2026-01-16T21:18:50.843945978Z" level=info msg="connecting to shim fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b" address="unix:///run/containerd/s/969a1f4bb4ea3f3be142a7987189ada2344035c841d7ea958d387532f07c46c1" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:50.902970 systemd[1]: Started cri-containerd-fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b.scope - libcontainer container fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b. Jan 16 21:18:50.909000 audit: BPF prog-id=156 op=LOAD Jan 16 21:18:50.909000 audit: BPF prog-id=157 op=LOAD Jan 16 21:18:50.913565 kernel: audit: type=1334 audit(1768598330.909:450): prog-id=156 op=LOAD Jan 16 21:18:50.913607 kernel: audit: type=1334 audit(1768598330.909:451): prog-id=157 op=LOAD Jan 16 21:18:50.909000 audit[4066]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.918629 kernel: audit: type=1300 audit(1768598330.909:451): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.925279 
kernel: audit: type=1327 audit(1768598330.909:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.927240 kernel: audit: type=1334 audit(1768598330.909:452): prog-id=157 op=UNLOAD Jan 16 21:18:50.909000 audit: BPF prog-id=157 op=UNLOAD Jan 16 21:18:50.909000 audit[4066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.931820 kernel: audit: type=1300 audit(1768598330.909:452): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.909000 audit: BPF prog-id=158 op=LOAD Jan 16 21:18:50.939282 kernel: audit: type=1327 audit(1768598330.909:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.939321 kernel: audit: type=1334 audit(1768598330.909:453): prog-id=158 op=LOAD Jan 16 21:18:50.909000 audit[4066]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.943189 kernel: audit: type=1300 audit(1768598330.909:453): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.910000 audit: BPF prog-id=159 op=LOAD Jan 16 21:18:50.910000 audit[4066]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.910000 audit: BPF prog-id=159 op=UNLOAD Jan 16 21:18:50.910000 audit[4066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.910000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.910000 audit: BPF prog-id=158 op=UNLOAD Jan 16 21:18:50.910000 audit[4066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:50.910000 audit: BPF prog-id=160 op=LOAD Jan 16 21:18:50.910000 audit[4066]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4055 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:50.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664303930393237666339363439366539656561643235353963363662 Jan 16 21:18:54.386017 containerd[2539]: time="2026-01-16T21:18:54.385956696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6lnsm,Uid:fda050a9-736f-494d-9ba8-3d6bde4f73a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b\"" Jan 16 21:18:54.388213 containerd[2539]: 
time="2026-01-16T21:18:54.388188738Z" level=info msg="CreateContainer within sandbox \"fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 21:18:54.403786 containerd[2539]: time="2026-01-16T21:18:54.403698246Z" level=info msg="connecting to shim 8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e" address="unix:///run/containerd/s/b5d92760be424ab7f923d1c9d8cddaed8d5a4026c676fc12ad53454ea8ece3cc" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:54.421996 systemd[1]: Started cri-containerd-8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e.scope - libcontainer container 8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e. Jan 16 21:18:54.430858 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 16 21:18:54.430924 kernel: audit: type=1334 audit(1768598334.428:458): prog-id=161 op=LOAD Jan 16 21:18:54.428000 audit: BPF prog-id=161 op=LOAD Jan 16 21:18:54.429000 audit: BPF prog-id=162 op=LOAD Jan 16 21:18:54.433688 kernel: audit: type=1334 audit(1768598334.429:459): prog-id=162 op=LOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.438881 kernel: audit: type=1300 audit(1768598334.429:459): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.445106 kernel: audit: type=1327 audit(1768598334.429:459): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.429000 audit: BPF prog-id=162 op=UNLOAD Jan 16 21:18:54.447840 kernel: audit: type=1334 audit(1768598334.429:460): prog-id=162 op=UNLOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.453668 kernel: audit: type=1300 audit(1768598334.429:460): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.459969 kernel: audit: type=1327 audit(1768598334.429:460): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.462453 kernel: audit: type=1334 audit(1768598334.429:461): prog-id=163 op=LOAD Jan 16 21:18:54.429000 audit: BPF prog-id=163 op=LOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.472942 kernel: audit: type=1300 audit(1768598334.429:461): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.429000 audit: BPF prog-id=164 op=LOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.429000 audit: BPF prog-id=164 op=UNLOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.429000 audit: BPF prog-id=163 op=UNLOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:54.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.429000 audit: BPF prog-id=165 op=LOAD Jan 16 21:18:54.429000 audit[4112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4101 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:18:54.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.477858 kernel: audit: type=1327 audit(1768598334.429:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863396264643139393763366439306434316138666663633137373838 Jan 16 21:18:54.779084 containerd[2539]: time="2026-01-16T21:18:54.779062584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9wdmt,Uid:6051bd58-3025-4c24-995c-805be42b574f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e\"" Jan 16 21:18:54.780139 containerd[2539]: time="2026-01-16T21:18:54.780120937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 21:18:54.926563 containerd[2539]: time="2026-01-16T21:18:54.926544429Z" level=info msg="Container 35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:54.928822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078390373.mount: Deactivated successfully. 
Jan 16 21:18:55.084695 containerd[2539]: time="2026-01-16T21:18:55.084509332Z" level=info msg="CreateContainer within sandbox \"fd090927fc96496e9eead2559c66bc6fe39a757552cbf67d09fa52747027059b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25\"" Jan 16 21:18:55.085092 containerd[2539]: time="2026-01-16T21:18:55.085076629Z" level=info msg="StartContainer for \"35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25\"" Jan 16 21:18:55.086899 containerd[2539]: time="2026-01-16T21:18:55.086725420Z" level=info msg="connecting to shim 35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25" address="unix:///run/containerd/s/969a1f4bb4ea3f3be142a7987189ada2344035c841d7ea958d387532f07c46c1" protocol=ttrpc version=3 Jan 16 21:18:55.105032 systemd[1]: Started cri-containerd-35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25.scope - libcontainer container 35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25. 
Jan 16 21:18:55.147000 audit: BPF prog-id=166 op=LOAD Jan 16 21:18:55.147000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4055 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335636535643864316539343465633431626332666330663464336133 Jan 16 21:18:55.147000 audit: BPF prog-id=167 op=LOAD Jan 16 21:18:55.147000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4055 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335636535643864316539343465633431626332666330663464336133 Jan 16 21:18:55.148000 audit: BPF prog-id=167 op=UNLOAD Jan 16 21:18:55.148000 audit[4138]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.148000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335636535643864316539343465633431626332666330663464336133 Jan 16 21:18:55.148000 audit: BPF prog-id=166 op=UNLOAD Jan 16 21:18:55.148000 audit[4138]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4055 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335636535643864316539343465633431626332666330663464336133 Jan 16 21:18:55.148000 audit: BPF prog-id=168 op=LOAD Jan 16 21:18:55.148000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4055 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335636535643864316539343465633431626332666330663464336133 Jan 16 21:18:55.167480 containerd[2539]: time="2026-01-16T21:18:55.167457062Z" level=info msg="StartContainer for \"35ce5d8d1e944ec41bc2fc0f4d3a3dc8587b95f73e70b5377e13ca353f2e6c25\" returns successfully" Jan 16 21:18:55.388000 audit[4201]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 16 21:18:55.388000 audit[4202]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.388000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb9624d50 a2=0 a3=7ffcb9624d3c items=0 ppid=4151 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:18:55.388000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd3251fc0 a2=0 a3=7ffcd3251fac items=0 ppid=4151 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:18:55.391000 audit[4205]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=4205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.391000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffdb06740 a2=0 a3=7ffffdb0672c items=0 ppid=4151 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:18:55.392000 audit[4206]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=4206 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.392000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd672fe7b0 a2=0 a3=7ffd672fe79c items=0 ppid=4151 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:18:55.393000 audit[4208]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.393000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9c0cabb0 a2=0 a3=7ffe9c0cab9c items=0 ppid=4151 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:18:55.393000 audit[4207]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.393000 audit[4207]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2410d170 a2=0 a3=7fff2410d15c items=0 ppid=4151 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:18:55.492000 audit[4209]: NETFILTER_CFG table=filter:63 family=2 
entries=1 op=nft_register_chain pid=4209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.492000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff42172ba0 a2=0 a3=7fff42172b8c items=0 ppid=4151 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:18:55.495000 audit[4211]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.495000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff78efe1c0 a2=0 a3=7fff78efe1ac items=0 ppid=4151 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 21:18:55.498000 audit[4214]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.498000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcdf0f22f0 a2=0 a3=7ffcdf0f22dc items=0 ppid=4151 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.498000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 21:18:55.499000 audit[4215]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.499000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6010e6f0 a2=0 a3=7ffc6010e6dc items=0 ppid=4151 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:18:55.501000 audit[4217]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.501000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd48c1f120 a2=0 a3=7ffd48c1f10c items=0 ppid=4151 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:18:55.502000 audit[4218]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.502000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffcd7456d90 a2=0 a3=7ffcd7456d7c items=0 ppid=4151 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:18:55.504000 audit[4220]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.504000 audit[4220]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd69c276d0 a2=0 a3=7ffd69c276bc items=0 ppid=4151 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:18:55.507000 audit[4223]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.507000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff8113a850 a2=0 a3=7fff8113a83c items=0 ppid=4151 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.507000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 21:18:55.508000 audit[4224]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.508000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3353adc0 a2=0 a3=7fff3353adac items=0 ppid=4151 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:18:55.510000 audit[4226]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.510000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1d7f7270 a2=0 a3=7ffd1d7f725c items=0 ppid=4151 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:18:55.511000 audit[4227]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.511000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffe970becf0 a2=0 a3=7ffe970becdc items=0 ppid=4151 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:18:55.513000 audit[4229]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.513000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc45da0430 a2=0 a3=7ffc45da041c items=0 ppid=4151 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:18:55.516000 audit[4232]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.516000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe962d3f80 a2=0 a3=7ffe962d3f6c items=0 ppid=4151 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.516000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:18:55.519000 audit[4235]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.519000 audit[4235]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe48bbc890 a2=0 a3=7ffe48bbc87c items=0 ppid=4151 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:18:55.520000 audit[4236]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.520000 audit[4236]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff51341d60 a2=0 a3=7fff51341d4c items=0 ppid=4151 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:18:55.523000 audit[4238]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.523000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7fffd0b92540 a2=0 a3=7fffd0b9252c items=0 ppid=4151 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:18:55.527000 audit[4241]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.527000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5538a6e0 a2=0 a3=7fff5538a6cc items=0 ppid=4151 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:18:55.528000 audit[4242]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.528000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd99b41c80 a2=0 a3=7ffd99b41c6c items=0 ppid=4151 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 
21:18:55.530000 audit[4244]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:18:55.530000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc002b2e70 a2=0 a3=7ffc002b2e5c items=0 ppid=4151 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:18:55.776000 audit[4250]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:18:55.776000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd83ac6070 a2=0 a3=7ffd83ac605c items=0 ppid=4151 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:18:55.788000 audit[4250]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:18:55.788000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd83ac6070 a2=0 a3=7ffd83ac605c items=0 ppid=4151 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:18:55.789000 audit[4255]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.789000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff7fd79c60 a2=0 a3=7fff7fd79c4c items=0 ppid=4151 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:18:55.792000 audit[4257]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.792000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffea58b4a10 a2=0 a3=7ffea58b49fc items=0 ppid=4151 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 21:18:55.795000 audit[4260]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.795000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffc4d6b4dd0 a2=0 a3=7ffc4d6b4dbc items=0 ppid=4151 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 21:18:55.796000 audit[4261]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.796000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd22648b10 a2=0 a3=7ffd22648afc items=0 ppid=4151 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:18:55.798000 audit[4263]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.798000 audit[4263]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6f192120 a2=0 a3=7ffd6f19210c items=0 ppid=4151 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.798000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:18:55.799000 audit[4264]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.799000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8db14010 a2=0 a3=7ffd8db13ffc items=0 ppid=4151 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:18:55.801000 audit[4266]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.801000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3863cac0 a2=0 a3=7ffc3863caac items=0 ppid=4151 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 21:18:55.804000 audit[4269]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.804000 audit[4269]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffd1e5a2390 a2=0 a3=7ffd1e5a237c items=0 ppid=4151 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:18:55.805000 audit[4270]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.805000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc79f3770 a2=0 a3=7ffdc79f375c items=0 ppid=4151 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:18:55.807000 audit[4272]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.807000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeac551eb0 a2=0 a3=7ffeac551e9c items=0 ppid=4151 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.807000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:18:55.808000 audit[4273]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.808000 audit[4273]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd26f259a0 a2=0 a3=7ffd26f2598c items=0 ppid=4151 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:18:55.810000 audit[4275]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.810000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4a5e71e0 a2=0 a3=7fff4a5e71cc items=0 ppid=4151 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:18:55.813000 audit[4278]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.813000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffd294abe70 a2=0 a3=7ffd294abe5c items=0 ppid=4151 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:18:55.816000 audit[4281]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.816000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde9bd21b0 a2=0 a3=7ffde9bd219c items=0 ppid=4151 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 21:18:55.817000 audit[4282]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.817000 audit[4282]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc840bdac0 a2=0 a3=7ffc840bdaac items=0 ppid=4151 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.817000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:18:55.819000 audit[4284]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.819000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffddab9c170 a2=0 a3=7ffddab9c15c items=0 ppid=4151 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:18:55.822000 audit[4287]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.822000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd054261b0 a2=0 a3=7ffd0542619c items=0 ppid=4151 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:18:55.823000 audit[4288]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.823000 audit[4288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff923b0c90 a2=0 a3=7fff923b0c7c items=0 ppid=4151 
pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:18:55.825000 audit[4290]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.825000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd03ec4f60 a2=0 a3=7ffd03ec4f4c items=0 ppid=4151 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:18:55.826000 audit[4291]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.826000 audit[4291]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff89d18270 a2=0 a3=7fff89d1825c items=0 ppid=4151 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:18:55.828000 audit[4293]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4293 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 16 21:18:55.828000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff737554e0 a2=0 a3=7fff737554cc items=0 ppid=4151 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:18:55.831000 audit[4296]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:18:55.831000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe523d1c20 a2=0 a3=7ffe523d1c0c items=0 ppid=4151 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:18:55.833000 audit[4298]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:18:55.833000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff7dba4470 a2=0 a3=7fff7dba445c items=0 ppid=4151 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.833000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:18:55.834000 audit[4298]: NETFILTER_CFG table=nat:107 
family=10 entries=7 op=nft_register_chain pid=4298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:18:55.834000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff7dba4470 a2=0 a3=7fff7dba445c items=0 ppid=4151 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:55.834000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:18:58.074536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213222363.mount: Deactivated successfully. Jan 16 21:18:58.452369 containerd[2539]: time="2026-01-16T21:18:58.452310078Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:58.454692 containerd[2539]: time="2026-01-16T21:18:58.454622392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23559564" Jan 16 21:18:58.457135 containerd[2539]: time="2026-01-16T21:18:58.457113552Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:58.460643 containerd[2539]: time="2026-01-16T21:18:58.460621958Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:58.461014 containerd[2539]: time="2026-01-16T21:18:58.460993527Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.680727083s" Jan 16 21:18:58.461057 containerd[2539]: time="2026-01-16T21:18:58.461019231Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 16 21:18:58.462903 containerd[2539]: time="2026-01-16T21:18:58.462872027Z" level=info msg="CreateContainer within sandbox \"8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 21:18:58.478871 containerd[2539]: time="2026-01-16T21:18:58.477224982Z" level=info msg="Container 751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:58.491054 containerd[2539]: time="2026-01-16T21:18:58.491032202Z" level=info msg="CreateContainer within sandbox \"8c9bdd1997c6d90d41a8ffcc17788f72513c5accbe57f4631ffc57b7f3f78c3e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a\"" Jan 16 21:18:58.491531 containerd[2539]: time="2026-01-16T21:18:58.491470193Z" level=info msg="StartContainer for \"751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a\"" Jan 16 21:18:58.492349 containerd[2539]: time="2026-01-16T21:18:58.492314595Z" level=info msg="connecting to shim 751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a" address="unix:///run/containerd/s/b5d92760be424ab7f923d1c9d8cddaed8d5a4026c676fc12ad53454ea8ece3cc" protocol=ttrpc version=3 Jan 16 21:18:58.511978 systemd[1]: Started cri-containerd-751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a.scope - libcontainer container 751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a. 
Jan 16 21:18:58.518000 audit: BPF prog-id=169 op=LOAD Jan 16 21:18:58.518000 audit: BPF prog-id=170 op=LOAD Jan 16 21:18:58.518000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=170 op=UNLOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=171 op=LOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.519000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=172 op=LOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=172 op=UNLOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=171 op=UNLOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:18:58.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.519000 audit: BPF prog-id=173 op=LOAD Jan 16 21:18:58.519000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4101 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:58.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735316364363137623935376566376162333339616161343634396232 Jan 16 21:18:58.534697 containerd[2539]: time="2026-01-16T21:18:58.534666161Z" level=info msg="StartContainer for \"751cd617b957ef7ab339aaa4649b2f81a4a70e145cb30761f96e9583fea6e19a\" returns successfully" Jan 16 21:18:59.296580 kubelet[3994]: I0116 21:18:59.296525 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6lnsm" podStartSLOduration=13.296509621 podStartE2EDuration="13.296509621s" podCreationTimestamp="2026-01-16 21:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:55.289581415 +0000 UTC m=+9.137068331" watchObservedRunningTime="2026-01-16 21:18:59.296509621 +0000 UTC m=+13.143996541" Jan 16 21:18:59.297194 kubelet[3994]: I0116 21:18:59.297021 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9wdmt" podStartSLOduration=8.615162187 podStartE2EDuration="12.296994626s" 
podCreationTimestamp="2026-01-16 21:18:47 +0000 UTC" firstStartedPulling="2026-01-16 21:18:54.77980753 +0000 UTC m=+8.627294446" lastFinishedPulling="2026-01-16 21:18:58.46163997 +0000 UTC m=+12.309126885" observedRunningTime="2026-01-16 21:18:59.296370076 +0000 UTC m=+13.143856994" watchObservedRunningTime="2026-01-16 21:18:59.296994626 +0000 UTC m=+13.144481543" Jan 16 21:19:03.664222 sudo[2991]: pam_unix(sudo:session): session closed for user root Jan 16 21:19:03.673518 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 16 21:19:03.673608 kernel: audit: type=1106 audit(1768598343.663:530): pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:03.663000 audit[2991]: USER_END pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:03.663000 audit[2991]: CRED_DISP pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:03.679863 kernel: audit: type=1104 audit(1768598343.663:531): pid=2991 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:03.776995 sshd[2990]: Connection closed by 10.200.16.10 port 49524 Jan 16 21:19:03.776886 sshd-session[2986]: pam_unix(sshd:session): session closed for user core Jan 16 21:19:03.777000 audit[2986]: USER_END pid=2986 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:19:03.787882 kernel: audit: type=1106 audit(1768598343.777:532): pid=2986 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:19:03.788309 systemd-logind[2504]: Session 10 logged out. Waiting for processes to exit. Jan 16 21:19:03.789874 systemd[1]: sshd@6-10.200.8.41:22-10.200.16.10:49524.service: Deactivated successfully. Jan 16 21:19:03.791608 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 21:19:03.791791 systemd[1]: session-10.scope: Consumed 2.724s CPU time, 228.9M memory peak. Jan 16 21:19:03.796949 systemd-logind[2504]: Removed session 10. 
Jan 16 21:19:03.777000 audit[2986]: CRED_DISP pid=2986 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:19:03.804544 kernel: audit: type=1104 audit(1768598343.777:533): pid=2986 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:19:03.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.41:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:03.812849 kernel: audit: type=1131 audit(1768598343.789:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.41:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:04.587000 audit[4385]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.593876 kernel: audit: type=1325 audit(1768598344.587:535): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.587000 audit[4385]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffde5231100 a2=0 a3=7ffde52310ec items=0 ppid=4151 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.602788 kernel: audit: type=1300 audit(1768598344.587:535): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffde5231100 a2=0 a3=7ffde52310ec items=0 ppid=4151 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:04.609038 kernel: audit: type=1327 audit(1768598344.587:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:04.593000 audit[4385]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.621108 kernel: audit: type=1325 audit(1768598344.593:536): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.621208 kernel: audit: type=1300 audit(1768598344.593:536): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde5231100 
a2=0 a3=0 items=0 ppid=4151 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.593000 audit[4385]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde5231100 a2=0 a3=0 items=0 ppid=4151 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:04.612000 audit[4387]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.612000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3ea51de0 a2=0 a3=7ffd3ea51dcc items=0 ppid=4151 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:04.631000 audit[4387]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:04.631000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3ea51de0 a2=0 a3=0 items=0 ppid=4151 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.631000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.475000 audit[4389]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:06.475000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe9f591ce0 a2=0 a3=7ffe9f591ccc items=0 ppid=4151 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.500000 audit[4389]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:06.500000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9f591ce0 a2=0 a3=0 items=0 ppid=4151 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.672000 audit[4391]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:06.672000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc5f015540 a2=0 a3=7ffc5f01552c items=0 ppid=4151 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.677000 audit[4391]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:06.677000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5f015540 a2=0 a3=0 items=0 ppid=4151 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:07.690000 audit[4393]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:07.690000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb14589b0 a2=0 a3=7ffcb145899c items=0 ppid=4151 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:07.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:07.696000 audit[4393]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:07.696000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb14589b0 a2=0 a3=0 items=0 ppid=4151 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:07.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:08.053710 systemd[1]: Created slice kubepods-besteffort-pod3778c240_f429_45e1_b733_3fd9b70131f4.slice - libcontainer container kubepods-besteffort-pod3778c240_f429_45e1_b733_3fd9b70131f4.slice. Jan 16 21:19:08.071773 kubelet[3994]: I0116 21:19:08.071739 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclgv\" (UniqueName: \"kubernetes.io/projected/3778c240-f429-45e1-b733-3fd9b70131f4-kube-api-access-cclgv\") pod \"calico-typha-5dc9767776-5kbxv\" (UID: \"3778c240-f429-45e1-b733-3fd9b70131f4\") " pod="calico-system/calico-typha-5dc9767776-5kbxv" Jan 16 21:19:08.072044 kubelet[3994]: I0116 21:19:08.071785 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3778c240-f429-45e1-b733-3fd9b70131f4-tigera-ca-bundle\") pod \"calico-typha-5dc9767776-5kbxv\" (UID: \"3778c240-f429-45e1-b733-3fd9b70131f4\") " pod="calico-system/calico-typha-5dc9767776-5kbxv" Jan 16 21:19:08.072044 kubelet[3994]: I0116 21:19:08.071803 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3778c240-f429-45e1-b733-3fd9b70131f4-typha-certs\") pod \"calico-typha-5dc9767776-5kbxv\" (UID: \"3778c240-f429-45e1-b733-3fd9b70131f4\") " pod="calico-system/calico-typha-5dc9767776-5kbxv" Jan 16 21:19:08.231909 systemd[1]: Created slice kubepods-besteffort-poda01bc629_bbe1_46e6_b7e4_da2a0f8876a6.slice - libcontainer container kubepods-besteffort-poda01bc629_bbe1_46e6_b7e4_da2a0f8876a6.slice. 
Jan 16 21:19:08.273265 kubelet[3994]: I0116 21:19:08.273164 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-var-run-calico\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273265 kubelet[3994]: I0116 21:19:08.273223 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-xtables-lock\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273265 kubelet[3994]: I0116 21:19:08.273265 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-cni-log-dir\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273477 kubelet[3994]: I0116 21:19:08.273279 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-policysync\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273477 kubelet[3994]: I0116 21:19:08.273295 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-node-certs\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273477 kubelet[3994]: I0116 21:19:08.273308 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-tigera-ca-bundle\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273477 kubelet[3994]: I0116 21:19:08.273322 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-cni-bin-dir\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273477 kubelet[3994]: I0116 21:19:08.273337 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p6qf\" (UniqueName: \"kubernetes.io/projected/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-kube-api-access-5p6qf\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273556 kubelet[3994]: I0116 21:19:08.273356 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-var-lib-calico\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273556 kubelet[3994]: I0116 21:19:08.273372 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-flexvol-driver-host\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273556 kubelet[3994]: I0116 21:19:08.273387 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-lib-modules\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.273556 kubelet[3994]: I0116 21:19:08.273401 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a01bc629-bbe1-46e6-b7e4-da2a0f8876a6-cni-net-dir\") pod \"calico-node-t8zsr\" (UID: \"a01bc629-bbe1-46e6-b7e4-da2a0f8876a6\") " pod="calico-system/calico-node-t8zsr" Jan 16 21:19:08.357404 containerd[2539]: time="2026-01-16T21:19:08.357225584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dc9767776-5kbxv,Uid:3778c240-f429-45e1-b733-3fd9b70131f4,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:08.379451 kubelet[3994]: E0116 21:19:08.379367 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.379451 kubelet[3994]: W0116 21:19:08.379383 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.379451 kubelet[3994]: E0116 21:19:08.379411 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.391184 kubelet[3994]: E0116 21:19:08.391149 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.391184 kubelet[3994]: W0116 21:19:08.391164 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.391296 kubelet[3994]: E0116 21:19:08.391271 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.407398 containerd[2539]: time="2026-01-16T21:19:08.406800870Z" level=info msg="connecting to shim c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762" address="unix:///run/containerd/s/d243956affda95606f9472dc9757e09e2e131c2f73d5cce2782fa5a5d89f789c" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:08.428722 kubelet[3994]: E0116 21:19:08.428691 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:08.448012 systemd[1]: Started cri-containerd-c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762.scope - libcontainer container c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762. 
Jan 16 21:19:08.463596 kubelet[3994]: E0116 21:19:08.463567 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.463596 kubelet[3994]: W0116 21:19:08.463596 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.463682 kubelet[3994]: E0116 21:19:08.463608 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.463879 kubelet[3994]: E0116 21:19:08.463727 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.463909 kubelet[3994]: W0116 21:19:08.463880 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.463909 kubelet[3994]: E0116 21:19:08.463891 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.464146 kubelet[3994]: E0116 21:19:08.464130 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.464146 kubelet[3994]: W0116 21:19:08.464140 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.464205 kubelet[3994]: E0116 21:19:08.464150 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.464445 kubelet[3994]: E0116 21:19:08.464435 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.464481 kubelet[3994]: W0116 21:19:08.464446 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.464481 kubelet[3994]: E0116 21:19:08.464455 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.464959 kubelet[3994]: E0116 21:19:08.464945 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.464959 kubelet[3994]: W0116 21:19:08.464959 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.465149 kubelet[3994]: E0116 21:19:08.464975 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.465188 kubelet[3994]: E0116 21:19:08.465176 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.465188 kubelet[3994]: W0116 21:19:08.465184 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.465242 kubelet[3994]: E0116 21:19:08.465193 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.465321 kubelet[3994]: E0116 21:19:08.465309 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.465321 kubelet[3994]: W0116 21:19:08.465320 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.465415 kubelet[3994]: E0116 21:19:08.465327 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.465447 kubelet[3994]: E0116 21:19:08.465431 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.465447 kubelet[3994]: W0116 21:19:08.465436 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.465447 kubelet[3994]: E0116 21:19:08.465443 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.465900 kubelet[3994]: E0116 21:19:08.465884 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.466056 kubelet[3994]: W0116 21:19:08.465957 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.466056 kubelet[3994]: E0116 21:19:08.465975 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.466261 kubelet[3994]: E0116 21:19:08.466216 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.466261 kubelet[3994]: W0116 21:19:08.466223 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.466261 kubelet[3994]: E0116 21:19:08.466231 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.466494 kubelet[3994]: E0116 21:19:08.466459 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.466494 kubelet[3994]: W0116 21:19:08.466466 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.466494 kubelet[3994]: E0116 21:19:08.466474 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.466730 kubelet[3994]: E0116 21:19:08.466710 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.466730 kubelet[3994]: W0116 21:19:08.466721 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.466730 kubelet[3994]: E0116 21:19:08.466730 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.466994 kubelet[3994]: E0116 21:19:08.466984 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.466994 kubelet[3994]: W0116 21:19:08.466995 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.467050 kubelet[3994]: E0116 21:19:08.467005 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.467161 kubelet[3994]: E0116 21:19:08.467110 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.467161 kubelet[3994]: W0116 21:19:08.467149 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.467161 kubelet[3994]: E0116 21:19:08.467156 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.467387 kubelet[3994]: E0116 21:19:08.467325 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.467387 kubelet[3994]: W0116 21:19:08.467332 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.467387 kubelet[3994]: E0116 21:19:08.467340 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.467582 kubelet[3994]: E0116 21:19:08.467535 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.467582 kubelet[3994]: W0116 21:19:08.467541 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.467582 kubelet[3994]: E0116 21:19:08.467548 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.467797 kubelet[3994]: E0116 21:19:08.467762 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.467797 kubelet[3994]: W0116 21:19:08.467769 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.467797 kubelet[3994]: E0116 21:19:08.467777 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.468198 kubelet[3994]: E0116 21:19:08.468185 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.468198 kubelet[3994]: W0116 21:19:08.468198 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.468308 kubelet[3994]: E0116 21:19:08.468209 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.468356 kubelet[3994]: E0116 21:19:08.468310 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.468356 kubelet[3994]: W0116 21:19:08.468315 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.468356 kubelet[3994]: E0116 21:19:08.468322 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.468447 kubelet[3994]: E0116 21:19:08.468438 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.468447 kubelet[3994]: W0116 21:19:08.468443 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.468483 kubelet[3994]: E0116 21:19:08.468450 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.470000 audit: BPF prog-id=174 op=LOAD Jan 16 21:19:08.470000 audit: BPF prog-id=175 op=LOAD Jan 16 21:19:08.470000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.470000 audit: BPF prog-id=175 op=UNLOAD Jan 16 21:19:08.470000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.470000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.470000 audit: BPF prog-id=176 op=LOAD Jan 16 21:19:08.470000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.471000 audit: BPF prog-id=177 op=LOAD Jan 16 21:19:08.471000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.471000 audit: BPF prog-id=177 op=UNLOAD Jan 16 21:19:08.471000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:19:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.471000 audit: BPF prog-id=176 op=UNLOAD Jan 16 21:19:08.471000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.471000 audit: BPF prog-id=178 op=LOAD Jan 16 21:19:08.471000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4409 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323539636365333837393835336234343163343931363830393033 Jan 16 21:19:08.474706 kubelet[3994]: E0116 21:19:08.474690 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.474807 kubelet[3994]: W0116 21:19:08.474747 3994 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.474807 kubelet[3994]: E0116 21:19:08.474759 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.475017 kubelet[3994]: E0116 21:19:08.474949 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.475017 kubelet[3994]: W0116 21:19:08.474956 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.475017 kubelet[3994]: E0116 21:19:08.474964 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.475196 kubelet[3994]: I0116 21:19:08.475150 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjs4f\" (UniqueName: \"kubernetes.io/projected/ad5a70af-916a-4e95-9866-1f1c8f4329d0-kube-api-access-rjs4f\") pod \"csi-node-driver-7pf8t\" (UID: \"ad5a70af-916a-4e95-9866-1f1c8f4329d0\") " pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:08.475284 kubelet[3994]: E0116 21:19:08.475247 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.475284 kubelet[3994]: W0116 21:19:08.475252 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.475284 kubelet[3994]: E0116 21:19:08.475260 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.475501 kubelet[3994]: E0116 21:19:08.475493 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.475562 kubelet[3994]: W0116 21:19:08.475528 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.475562 kubelet[3994]: E0116 21:19:08.475536 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.475725 kubelet[3994]: I0116 21:19:08.475623 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad5a70af-916a-4e95-9866-1f1c8f4329d0-kubelet-dir\") pod \"csi-node-driver-7pf8t\" (UID: \"ad5a70af-916a-4e95-9866-1f1c8f4329d0\") " pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:08.475816 kubelet[3994]: E0116 21:19:08.475801 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.475816 kubelet[3994]: W0116 21:19:08.475808 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.475961 kubelet[3994]: E0116 21:19:08.475854 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.475961 kubelet[3994]: I0116 21:19:08.475869 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad5a70af-916a-4e95-9866-1f1c8f4329d0-registration-dir\") pod \"csi-node-driver-7pf8t\" (UID: \"ad5a70af-916a-4e95-9866-1f1c8f4329d0\") " pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:08.476148 kubelet[3994]: E0116 21:19:08.476120 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.476148 kubelet[3994]: W0116 21:19:08.476128 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.476360 kubelet[3994]: E0116 21:19:08.476223 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.476360 kubelet[3994]: I0116 21:19:08.476236 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad5a70af-916a-4e95-9866-1f1c8f4329d0-socket-dir\") pod \"csi-node-driver-7pf8t\" (UID: \"ad5a70af-916a-4e95-9866-1f1c8f4329d0\") " pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:08.476888 kubelet[3994]: E0116 21:19:08.476560 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.476972 kubelet[3994]: W0116 21:19:08.476959 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.477030 kubelet[3994]: E0116 21:19:08.477022 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.477087 kubelet[3994]: I0116 21:19:08.477080 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad5a70af-916a-4e95-9866-1f1c8f4329d0-varrun\") pod \"csi-node-driver-7pf8t\" (UID: \"ad5a70af-916a-4e95-9866-1f1c8f4329d0\") " pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:08.477374 kubelet[3994]: E0116 21:19:08.477303 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.477374 kubelet[3994]: W0116 21:19:08.477310 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.477688 kubelet[3994]: E0116 21:19:08.477593 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.477910 kubelet[3994]: E0116 21:19:08.477867 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.477910 kubelet[3994]: W0116 21:19:08.477899 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.478308 kubelet[3994]: E0116 21:19:08.478284 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.478431 kubelet[3994]: E0116 21:19:08.478409 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.478534 kubelet[3994]: W0116 21:19:08.478520 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.478729 kubelet[3994]: E0116 21:19:08.478715 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.479960 kubelet[3994]: E0116 21:19:08.479804 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.479960 kubelet[3994]: W0116 21:19:08.479817 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.480341 kubelet[3994]: E0116 21:19:08.480124 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.480716 kubelet[3994]: E0116 21:19:08.480500 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.480820 kubelet[3994]: W0116 21:19:08.480774 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.480931 kubelet[3994]: E0116 21:19:08.480861 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.481071 kubelet[3994]: E0116 21:19:08.481064 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.481292 kubelet[3994]: W0116 21:19:08.481266 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.481292 kubelet[3994]: E0116 21:19:08.481280 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.481782 kubelet[3994]: E0116 21:19:08.481750 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.481782 kubelet[3994]: W0116 21:19:08.481760 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.481782 kubelet[3994]: E0116 21:19:08.481771 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.482195 kubelet[3994]: E0116 21:19:08.482145 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.482240 kubelet[3994]: W0116 21:19:08.482233 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.482281 kubelet[3994]: E0116 21:19:08.482275 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.499853 containerd[2539]: time="2026-01-16T21:19:08.499811054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dc9767776-5kbxv,Uid:3778c240-f429-45e1-b733-3fd9b70131f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762\"" Jan 16 21:19:08.501395 containerd[2539]: time="2026-01-16T21:19:08.501268792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 21:19:08.535949 containerd[2539]: time="2026-01-16T21:19:08.535927167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8zsr,Uid:a01bc629-bbe1-46e6-b7e4-da2a0f8876a6,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:08.577491 containerd[2539]: time="2026-01-16T21:19:08.577445969Z" level=info msg="connecting to shim 76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c" address="unix:///run/containerd/s/e6a190a3bb24c25561ff53b7d2260f1269d8cedd87a7343a17943cb83e433797" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:08.577867 kubelet[3994]: E0116 21:19:08.577584 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.577867 kubelet[3994]: W0116 21:19:08.577595 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.577867 kubelet[3994]: E0116 21:19:08.577607 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.578304 kubelet[3994]: E0116 21:19:08.578165 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.578304 kubelet[3994]: W0116 21:19:08.578217 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.578304 kubelet[3994]: E0116 21:19:08.578233 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.578800 kubelet[3994]: E0116 21:19:08.578772 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.578800 kubelet[3994]: W0116 21:19:08.578784 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.578971 kubelet[3994]: E0116 21:19:08.578858 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.579196 kubelet[3994]: E0116 21:19:08.579178 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.579196 kubelet[3994]: W0116 21:19:08.579186 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.579322 kubelet[3994]: E0116 21:19:08.579244 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.579485 kubelet[3994]: E0116 21:19:08.579469 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.579485 kubelet[3994]: W0116 21:19:08.579476 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.579646 kubelet[3994]: E0116 21:19:08.579616 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.579914 kubelet[3994]: E0116 21:19:08.579901 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.579914 kubelet[3994]: W0116 21:19:08.579912 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.580040 kubelet[3994]: E0116 21:19:08.580027 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.580119 kubelet[3994]: E0116 21:19:08.580109 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.580119 kubelet[3994]: W0116 21:19:08.580117 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.580253 kubelet[3994]: E0116 21:19:08.580242 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.580344 kubelet[3994]: E0116 21:19:08.580338 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.580370 kubelet[3994]: W0116 21:19:08.580345 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.580419 kubelet[3994]: E0116 21:19:08.580407 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.580884 kubelet[3994]: E0116 21:19:08.580616 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.580884 kubelet[3994]: W0116 21:19:08.580624 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.580884 kubelet[3994]: E0116 21:19:08.580634 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.580884 kubelet[3994]: E0116 21:19:08.580763 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.580884 kubelet[3994]: W0116 21:19:08.580769 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.580884 kubelet[3994]: E0116 21:19:08.580776 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.581050 kubelet[3994]: E0116 21:19:08.581002 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581050 kubelet[3994]: W0116 21:19:08.581009 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.581091 kubelet[3994]: E0116 21:19:08.581083 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.581311 kubelet[3994]: E0116 21:19:08.581300 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581311 kubelet[3994]: W0116 21:19:08.581310 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.581377 kubelet[3994]: E0116 21:19:08.581369 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.581492 kubelet[3994]: E0116 21:19:08.581485 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581492 kubelet[3994]: W0116 21:19:08.581492 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.581573 kubelet[3994]: E0116 21:19:08.581565 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.581615 kubelet[3994]: E0116 21:19:08.581609 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581639 kubelet[3994]: W0116 21:19:08.581616 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.581693 kubelet[3994]: E0116 21:19:08.581684 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.581723 kubelet[3994]: E0116 21:19:08.581717 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581744 kubelet[3994]: W0116 21:19:08.581724 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.581806 kubelet[3994]: E0116 21:19:08.581797 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.581965 kubelet[3994]: E0116 21:19:08.581951 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.581965 kubelet[3994]: W0116 21:19:08.581959 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.582161 kubelet[3994]: E0116 21:19:08.582030 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.582392 kubelet[3994]: E0116 21:19:08.582375 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.582392 kubelet[3994]: W0116 21:19:08.582387 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.582612 kubelet[3994]: E0116 21:19:08.582488 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.582706 kubelet[3994]: E0116 21:19:08.582658 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.582706 kubelet[3994]: W0116 21:19:08.582666 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.582706 kubelet[3994]: E0116 21:19:08.582675 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.583546 kubelet[3994]: E0116 21:19:08.582894 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.583546 kubelet[3994]: W0116 21:19:08.582903 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.583546 kubelet[3994]: E0116 21:19:08.582916 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.583674 kubelet[3994]: E0116 21:19:08.583563 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.583674 kubelet[3994]: W0116 21:19:08.583574 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.583773 kubelet[3994]: E0116 21:19:08.583760 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.584114 kubelet[3994]: E0116 21:19:08.584097 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.584114 kubelet[3994]: W0116 21:19:08.584113 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.584249 kubelet[3994]: E0116 21:19:08.584141 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.584432 kubelet[3994]: E0116 21:19:08.584418 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.584432 kubelet[3994]: W0116 21:19:08.584432 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.584818 kubelet[3994]: E0116 21:19:08.584649 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.584818 kubelet[3994]: W0116 21:19:08.584661 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.584818 kubelet[3994]: E0116 21:19:08.584668 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.584818 kubelet[3994]: E0116 21:19:08.584702 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.584953 kubelet[3994]: E0116 21:19:08.584873 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.584953 kubelet[3994]: W0116 21:19:08.584880 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.584953 kubelet[3994]: E0116 21:19:08.584897 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.585349 kubelet[3994]: E0116 21:19:08.585110 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.585349 kubelet[3994]: W0116 21:19:08.585118 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.585349 kubelet[3994]: E0116 21:19:08.585127 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:08.592789 kubelet[3994]: E0116 21:19:08.592740 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:08.592789 kubelet[3994]: W0116 21:19:08.592752 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:08.592789 kubelet[3994]: E0116 21:19:08.592773 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:08.602000 systemd[1]: Started cri-containerd-76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c.scope - libcontainer container 76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c. Jan 16 21:19:08.606000 audit: BPF prog-id=179 op=LOAD Jan 16 21:19:08.606000 audit: BPF prog-id=180 op=LOAD Jan 16 21:19:08.606000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=180 op=UNLOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=181 op=LOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=182 op=LOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=182 op=UNLOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=181 op=UNLOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.607000 audit: BPF prog-id=183 op=LOAD Jan 16 21:19:08.607000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4500 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643138333466323631326366333833353632616430616364663566 Jan 16 21:19:08.622725 containerd[2539]: time="2026-01-16T21:19:08.622708332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8zsr,Uid:a01bc629-bbe1-46e6-b7e4-da2a0f8876a6,Namespace:calico-system,Attempt:0,} returns 
sandbox id \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\"" Jan 16 21:19:08.709000 audit[4565]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:08.712475 kernel: kauditd_printk_skb: 69 callbacks suppressed Jan 16 21:19:08.712522 kernel: audit: type=1325 audit(1768598348.709:561): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:08.709000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe435efea0 a2=0 a3=7ffe435efe8c items=0 ppid=4151 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:08.723997 kernel: audit: type=1300 audit(1768598348.709:561): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe435efea0 a2=0 a3=7ffe435efe8c items=0 ppid=4151 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.724040 kernel: audit: type=1327 audit(1768598348.709:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:08.714000 audit[4565]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:08.727217 kernel: audit: type=1325 audit(1768598348.714:562): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jan 16 21:19:08.714000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe435efea0 a2=0 a3=0 items=0 ppid=4151 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.732313 kernel: audit: type=1300 audit(1768598348.714:562): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe435efea0 a2=0 a3=0 items=0 ppid=4151 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:08.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:08.735228 kernel: audit: type=1327 audit(1768598348.714:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:09.660377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173856126.mount: Deactivated successfully. 
Jan 16 21:19:10.251729 kubelet[3994]: E0116 21:19:10.251702 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:10.302111 containerd[2539]: time="2026-01-16T21:19:10.302083659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:10.304000 containerd[2539]: time="2026-01-16T21:19:10.303939138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 16 21:19:10.306145 containerd[2539]: time="2026-01-16T21:19:10.306124523Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:10.308899 containerd[2539]: time="2026-01-16T21:19:10.308858744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:10.309492 containerd[2539]: time="2026-01-16T21:19:10.309223283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.807820132s" Jan 16 21:19:10.309492 containerd[2539]: time="2026-01-16T21:19:10.309245256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 16 21:19:10.309852 containerd[2539]: time="2026-01-16T21:19:10.309825639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 21:19:10.320015 containerd[2539]: time="2026-01-16T21:19:10.319995030Z" level=info msg="CreateContainer within sandbox \"c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 21:19:10.336508 containerd[2539]: time="2026-01-16T21:19:10.336102709Z" level=info msg="Container cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:10.349967 containerd[2539]: time="2026-01-16T21:19:10.349948248Z" level=info msg="CreateContainer within sandbox \"c6259cce3879853b441c491680903fed35310d2d34aed512044894520a7df762\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01\"" Jan 16 21:19:10.350217 containerd[2539]: time="2026-01-16T21:19:10.350202727Z" level=info msg="StartContainer for \"cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01\"" Jan 16 21:19:10.351293 containerd[2539]: time="2026-01-16T21:19:10.351271058Z" level=info msg="connecting to shim cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01" address="unix:///run/containerd/s/d243956affda95606f9472dc9757e09e2e131c2f73d5cce2782fa5a5d89f789c" protocol=ttrpc version=3 Jan 16 21:19:10.366984 systemd[1]: Started cri-containerd-cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01.scope - libcontainer container cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01. 
Jan 16 21:19:10.375000 audit: BPF prog-id=184 op=LOAD Jan 16 21:19:10.375000 audit: BPF prog-id=185 op=LOAD Jan 16 21:19:10.379225 kernel: audit: type=1334 audit(1768598350.375:563): prog-id=184 op=LOAD Jan 16 21:19:10.379285 kernel: audit: type=1334 audit(1768598350.375:564): prog-id=185 op=LOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.392199 kernel: audit: type=1300 audit(1768598350.375:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.392249 kernel: audit: type=1327 audit(1768598350.375:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=185 op=UNLOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=186 op=LOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=187 op=LOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=187 op=UNLOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=186 op=UNLOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.375000 audit: BPF prog-id=188 op=LOAD Jan 16 21:19:10.375000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4409 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:10.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362396665613931306637353235653266666464306437343735346633 Jan 16 21:19:10.418082 containerd[2539]: time="2026-01-16T21:19:10.418011661Z" level=info msg="StartContainer for \"cb9fea910f7525e2ffdd0d74754f376b9edb79f251b0349928a11a4475c98f01\" returns successfully" Jan 16 21:19:11.321288 kubelet[3994]: I0116 21:19:11.320730 3994 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dc9767776-5kbxv" podStartSLOduration=1.5119006449999999 podStartE2EDuration="3.320712912s" podCreationTimestamp="2026-01-16 21:19:08 +0000 UTC" firstStartedPulling="2026-01-16 21:19:08.500909645 +0000 UTC m=+22.348396561" lastFinishedPulling="2026-01-16 21:19:10.309721902 +0000 UTC m=+24.157208828" observedRunningTime="2026-01-16 21:19:11.320442374 +0000 UTC m=+25.167929290" watchObservedRunningTime="2026-01-16 21:19:11.320712912 +0000 UTC m=+25.168199822" Jan 16 21:19:11.391807 kubelet[3994]: E0116 21:19:11.391793 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.391998 kubelet[3994]: W0116 21:19:11.391854 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.391998 kubelet[3994]: E0116 21:19:11.391869 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.392275 kubelet[3994]: E0116 21:19:11.392268 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.392357 kubelet[3994]: W0116 21:19:11.392302 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.392357 kubelet[3994]: E0116 21:19:11.392313 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.392640 kubelet[3994]: E0116 21:19:11.392591 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.392640 kubelet[3994]: W0116 21:19:11.392598 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.392640 kubelet[3994]: E0116 21:19:11.392606 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.392928 kubelet[3994]: E0116 21:19:11.392887 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.392928 kubelet[3994]: W0116 21:19:11.392894 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.392928 kubelet[3994]: E0116 21:19:11.392902 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.393167 kubelet[3994]: E0116 21:19:11.393131 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.393167 kubelet[3994]: W0116 21:19:11.393138 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.393167 kubelet[3994]: E0116 21:19:11.393146 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.393375 kubelet[3994]: E0116 21:19:11.393341 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.393375 kubelet[3994]: W0116 21:19:11.393346 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.393375 kubelet[3994]: E0116 21:19:11.393354 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.393599 kubelet[3994]: E0116 21:19:11.393572 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.393599 kubelet[3994]: W0116 21:19:11.393579 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.393599 kubelet[3994]: E0116 21:19:11.393586 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.393821 kubelet[3994]: E0116 21:19:11.393781 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.393821 kubelet[3994]: W0116 21:19:11.393786 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.393821 kubelet[3994]: E0116 21:19:11.393793 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.394010 kubelet[3994]: E0116 21:19:11.393979 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.394010 kubelet[3994]: W0116 21:19:11.393985 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.394010 kubelet[3994]: E0116 21:19:11.393991 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.394223 kubelet[3994]: E0116 21:19:11.394208 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.394295 kubelet[3994]: W0116 21:19:11.394258 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.394295 kubelet[3994]: E0116 21:19:11.394267 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.394445 kubelet[3994]: E0116 21:19:11.394441 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.394516 kubelet[3994]: W0116 21:19:11.394479 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.394516 kubelet[3994]: E0116 21:19:11.394485 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.394697 kubelet[3994]: E0116 21:19:11.394659 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.394697 kubelet[3994]: W0116 21:19:11.394666 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.394697 kubelet[3994]: E0116 21:19:11.394672 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.394903 kubelet[3994]: E0116 21:19:11.394876 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.394903 kubelet[3994]: W0116 21:19:11.394883 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.394903 kubelet[3994]: E0116 21:19:11.394890 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.395121 kubelet[3994]: E0116 21:19:11.395094 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.395121 kubelet[3994]: W0116 21:19:11.395100 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.395121 kubelet[3994]: E0116 21:19:11.395106 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.395331 kubelet[3994]: E0116 21:19:11.395288 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.395331 kubelet[3994]: W0116 21:19:11.395293 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.395331 kubelet[3994]: E0116 21:19:11.395299 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.402708 kubelet[3994]: E0116 21:19:11.402669 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.402708 kubelet[3994]: W0116 21:19:11.402681 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.402708 kubelet[3994]: E0116 21:19:11.402693 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.403032 kubelet[3994]: E0116 21:19:11.403011 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.403032 kubelet[3994]: W0116 21:19:11.403021 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.403186 kubelet[3994]: E0116 21:19:11.403105 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.403316 kubelet[3994]: E0116 21:19:11.403304 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.403316 kubelet[3994]: W0116 21:19:11.403314 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.403421 kubelet[3994]: E0116 21:19:11.403326 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.403477 kubelet[3994]: E0116 21:19:11.403470 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.403540 kubelet[3994]: W0116 21:19:11.403509 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.403633 kubelet[3994]: E0116 21:19:11.403578 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.403753 kubelet[3994]: E0116 21:19:11.403749 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.403848 kubelet[3994]: W0116 21:19:11.403788 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.403848 kubelet[3994]: E0116 21:19:11.403799 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.404155 kubelet[3994]: E0116 21:19:11.404139 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.404155 kubelet[3994]: W0116 21:19:11.404147 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.404275 kubelet[3994]: E0116 21:19:11.404215 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.404393 kubelet[3994]: E0116 21:19:11.404387 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.404458 kubelet[3994]: W0116 21:19:11.404452 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.404551 kubelet[3994]: E0116 21:19:11.404497 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.404697 kubelet[3994]: E0116 21:19:11.404692 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.404767 kubelet[3994]: W0116 21:19:11.404726 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.404767 kubelet[3994]: E0116 21:19:11.404741 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.404975 kubelet[3994]: E0116 21:19:11.404937 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.404975 kubelet[3994]: W0116 21:19:11.404943 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.404975 kubelet[3994]: E0116 21:19:11.404955 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.405138 kubelet[3994]: E0116 21:19:11.405126 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.405138 kubelet[3994]: W0116 21:19:11.405131 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.405237 kubelet[3994]: E0116 21:19:11.405193 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.405380 kubelet[3994]: E0116 21:19:11.405368 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.405380 kubelet[3994]: W0116 21:19:11.405374 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.405750 kubelet[3994]: E0116 21:19:11.405524 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.405875 kubelet[3994]: E0116 21:19:11.405868 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.405921 kubelet[3994]: W0116 21:19:11.405915 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.406048 kubelet[3994]: E0116 21:19:11.406043 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.406085 kubelet[3994]: W0116 21:19:11.406080 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.406214 kubelet[3994]: E0116 21:19:11.406209 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.406274 kubelet[3994]: W0116 21:19:11.406269 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.406333 kubelet[3994]: E0116 21:19:11.406307 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.406414 kubelet[3994]: E0116 21:19:11.406245 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.406414 kubelet[3994]: E0116 21:19:11.406252 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.406592 kubelet[3994]: E0116 21:19:11.406542 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.406592 kubelet[3994]: W0116 21:19:11.406549 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.406592 kubelet[3994]: E0116 21:19:11.406556 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.406801 kubelet[3994]: E0116 21:19:11.406748 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.406801 kubelet[3994]: W0116 21:19:11.406754 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.406801 kubelet[3994]: E0116 21:19:11.406760 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.407152 kubelet[3994]: E0116 21:19:11.406990 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.407152 kubelet[3994]: W0116 21:19:11.406996 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.407152 kubelet[3994]: E0116 21:19:11.407004 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:11.407359 kubelet[3994]: E0116 21:19:11.407353 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:11.407392 kubelet[3994]: W0116 21:19:11.407387 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:11.407426 kubelet[3994]: E0116 21:19:11.407420 3994 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:11.432004 containerd[2539]: time="2026-01-16T21:19:11.431978064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:11.434012 containerd[2539]: time="2026-01-16T21:19:11.433948461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:11.436240 containerd[2539]: time="2026-01-16T21:19:11.436220243Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:11.438856 containerd[2539]: time="2026-01-16T21:19:11.438748585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:11.439389 containerd[2539]: time="2026-01-16T21:19:11.439013304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.129090365s" Jan 16 21:19:11.439389 containerd[2539]: time="2026-01-16T21:19:11.439036191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 16 21:19:11.440593 containerd[2539]: time="2026-01-16T21:19:11.440568397Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 21:19:11.455903 containerd[2539]: time="2026-01-16T21:19:11.454943614Z" level=info msg="Container 41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:11.457567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499636346.mount: Deactivated successfully. Jan 16 21:19:11.469726 containerd[2539]: time="2026-01-16T21:19:11.469704441Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21\"" Jan 16 21:19:11.470144 containerd[2539]: time="2026-01-16T21:19:11.470053181Z" level=info msg="StartContainer for \"41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21\"" Jan 16 21:19:11.472239 containerd[2539]: time="2026-01-16T21:19:11.472212000Z" level=info msg="connecting to shim 41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21" address="unix:///run/containerd/s/e6a190a3bb24c25561ff53b7d2260f1269d8cedd87a7343a17943cb83e433797" protocol=ttrpc version=3 Jan 16 21:19:11.495979 systemd[1]: Started cri-containerd-41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21.scope - libcontainer container 41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21. 
Jan 16 21:19:11.532000 audit: BPF prog-id=189 op=LOAD Jan 16 21:19:11.532000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4500 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653637323938313032633836383762306331363432343133383634 Jan 16 21:19:11.532000 audit: BPF prog-id=190 op=LOAD Jan 16 21:19:11.532000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4500 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653637323938313032633836383762306331363432343133383634 Jan 16 21:19:11.532000 audit: BPF prog-id=190 op=UNLOAD Jan 16 21:19:11.532000 audit[4649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:11.532000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653637323938313032633836383762306331363432343133383634 Jan 16 21:19:11.532000 audit: BPF prog-id=189 op=UNLOAD Jan 16 21:19:11.532000 audit[4649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653637323938313032633836383762306331363432343133383634 Jan 16 21:19:11.532000 audit: BPF prog-id=191 op=LOAD Jan 16 21:19:11.532000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4500 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653637323938313032633836383762306331363432343133383634 Jan 16 21:19:11.550011 containerd[2539]: time="2026-01-16T21:19:11.549947525Z" level=info msg="StartContainer for \"41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21\" returns successfully" Jan 16 21:19:11.554050 systemd[1]: cri-containerd-41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21.scope: Deactivated successfully. 
Jan 16 21:19:11.556727 containerd[2539]: time="2026-01-16T21:19:11.556705014Z" level=info msg="received container exit event container_id:\"41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21\" id:\"41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21\" pid:4663 exited_at:{seconds:1768598351 nanos:556401326}" Jan 16 21:19:11.556000 audit: BPF prog-id=191 op=UNLOAD Jan 16 21:19:11.569884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41e67298102c8687b0c1642413864cf2b73e5f1d80969fc2be116d6e54d7ce21-rootfs.mount: Deactivated successfully. Jan 16 21:19:12.252850 kubelet[3994]: E0116 21:19:12.252492 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:12.312749 kubelet[3994]: I0116 21:19:12.312738 3994 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 21:19:14.252488 kubelet[3994]: E0116 21:19:14.252262 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:16.252165 kubelet[3994]: E0116 21:19:16.251950 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:18.251917 kubelet[3994]: E0116 21:19:18.251704 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:20.252071 kubelet[3994]: E0116 21:19:20.251812 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:21.328508 containerd[2539]: time="2026-01-16T21:19:21.328461129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 21:19:22.253266 kubelet[3994]: E0116 21:19:22.253032 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:24.251859 kubelet[3994]: E0116 21:19:24.251823 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:26.252861 kubelet[3994]: E0116 21:19:26.252244 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:28.252748 kubelet[3994]: E0116 21:19:28.252724 3994 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:29.098025 containerd[2539]: time="2026-01-16T21:19:29.097993419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:29.100533 containerd[2539]: time="2026-01-16T21:19:29.100422633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 16 21:19:29.103002 containerd[2539]: time="2026-01-16T21:19:29.102971165Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:29.106657 containerd[2539]: time="2026-01-16T21:19:29.106604160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:29.107070 containerd[2539]: time="2026-01-16T21:19:29.106985486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.778469144s" Jan 16 21:19:29.107070 containerd[2539]: time="2026-01-16T21:19:29.107008324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 16 21:19:29.109235 containerd[2539]: 
time="2026-01-16T21:19:29.109215132Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 21:19:29.126129 containerd[2539]: time="2026-01-16T21:19:29.125050510Z" level=info msg="Container 78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:29.137784 containerd[2539]: time="2026-01-16T21:19:29.137762220Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d\"" Jan 16 21:19:29.138294 containerd[2539]: time="2026-01-16T21:19:29.138260810Z" level=info msg="StartContainer for \"78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d\"" Jan 16 21:19:29.139753 containerd[2539]: time="2026-01-16T21:19:29.139727721Z" level=info msg="connecting to shim 78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d" address="unix:///run/containerd/s/e6a190a3bb24c25561ff53b7d2260f1269d8cedd87a7343a17943cb83e433797" protocol=ttrpc version=3 Jan 16 21:19:29.160002 systemd[1]: Started cri-containerd-78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d.scope - libcontainer container 78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d. 
Jan 16 21:19:29.210650 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 16 21:19:29.210736 kernel: audit: type=1334 audit(1768598369.208:577): prog-id=192 op=LOAD Jan 16 21:19:29.208000 audit: BPF prog-id=192 op=LOAD Jan 16 21:19:29.208000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.215689 kernel: audit: type=1300 audit(1768598369.208:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.222442 kernel: audit: type=1327 audit(1768598369.208:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.224278 kernel: audit: type=1334 audit(1768598369.208:578): prog-id=193 op=LOAD Jan 16 21:19:29.208000 audit: BPF prog-id=193 op=LOAD Jan 16 21:19:29.231352 kernel: audit: type=1300 audit(1768598369.208:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.208000 audit: BPF prog-id=193 op=UNLOAD Jan 16 21:19:29.244023 kernel: audit: type=1327 audit(1768598369.208:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.244087 kernel: audit: type=1334 audit(1768598369.208:579): prog-id=193 op=UNLOAD Jan 16 21:19:29.208000 audit[4712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.250096 kernel: audit: type=1300 audit(1768598369.208:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.257308 kernel: audit: type=1327 audit(1768598369.208:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.208000 audit: BPF prog-id=192 op=UNLOAD Jan 16 21:19:29.260747 kernel: audit: type=1334 audit(1768598369.208:580): prog-id=192 op=UNLOAD Jan 16 21:19:29.208000 audit[4712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.208000 audit: BPF prog-id=194 op=LOAD Jan 16 21:19:29.208000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4500 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:29.208000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323738663830383933306638663264653839303864336463613338 Jan 16 21:19:29.265815 containerd[2539]: time="2026-01-16T21:19:29.265764036Z" level=info msg="StartContainer for \"78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d\" returns successfully" Jan 16 21:19:30.252063 kubelet[3994]: E0116 21:19:30.251970 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:30.549870 containerd[2539]: time="2026-01-16T21:19:30.549213922Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 21:19:30.550624 systemd[1]: cri-containerd-78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d.scope: Deactivated successfully. Jan 16 21:19:30.551185 systemd[1]: cri-containerd-78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d.scope: Consumed 371ms CPU time, 196.7M memory peak, 171.3M written to disk. 
Jan 16 21:19:30.552588 containerd[2539]: time="2026-01-16T21:19:30.552548872Z" level=info msg="received container exit event container_id:\"78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d\" id:\"78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d\" pid:4726 exited_at:{seconds:1768598370 nanos:552414953}" Jan 16 21:19:30.554000 audit: BPF prog-id=194 op=UNLOAD Jan 16 21:19:30.568909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78278f808930f8f2de8908d3dca38ac6906a8a3241549c4f4e9e365833df3e3d-rootfs.mount: Deactivated successfully. Jan 16 21:19:30.649063 kubelet[3994]: I0116 21:19:30.649049 3994 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 21:19:30.689276 systemd[1]: Created slice kubepods-burstable-pod1afe31de_e2b2_4f78_ad0e_413ab9447575.slice - libcontainer container kubepods-burstable-pod1afe31de_e2b2_4f78_ad0e_413ab9447575.slice. Jan 16 21:19:30.701711 systemd[1]: Created slice kubepods-besteffort-pod8b6a9edf_78a2_4eb2_9228_633a08a758ae.slice - libcontainer container kubepods-besteffort-pod8b6a9edf_78a2_4eb2_9228_633a08a758ae.slice. Jan 16 21:19:30.709424 systemd[1]: Created slice kubepods-besteffort-pod2ea28179_07a2_4a0c_9cc7_b4eadca6090c.slice - libcontainer container kubepods-besteffort-pod2ea28179_07a2_4a0c_9cc7_b4eadca6090c.slice. Jan 16 21:19:30.716508 systemd[1]: Created slice kubepods-besteffort-podaafa5b59_2cd9_4228_8f97_19effdd5c1a7.slice - libcontainer container kubepods-besteffort-podaafa5b59_2cd9_4228_8f97_19effdd5c1a7.slice. Jan 16 21:19:30.723125 systemd[1]: Created slice kubepods-besteffort-pod2fb4ecc8_b071_4dfc_9368_f4bc508fc3ef.slice - libcontainer container kubepods-besteffort-pod2fb4ecc8_b071_4dfc_9368_f4bc508fc3ef.slice. Jan 16 21:19:30.726308 systemd[1]: Created slice kubepods-besteffort-pod010e89e0_6574_4783_aa50_97e803ab00dc.slice - libcontainer container kubepods-besteffort-pod010e89e0_6574_4783_aa50_97e803ab00dc.slice. 
Jan 16 21:19:30.734248 systemd[1]: Created slice kubepods-besteffort-podbe8f7779_a5b2_41ff_909f_1387d7e3242a.slice - libcontainer container kubepods-besteffort-podbe8f7779_a5b2_41ff_909f_1387d7e3242a.slice. Jan 16 21:19:30.739171 kubelet[3994]: I0116 21:19:30.738374 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d443c368-ce47-4d9c-990c-7ed720465c2e-config-volume\") pod \"coredns-668d6bf9bc-pb2m9\" (UID: \"d443c368-ce47-4d9c-990c-7ed720465c2e\") " pod="kube-system/coredns-668d6bf9bc-pb2m9" Jan 16 21:19:30.739171 kubelet[3994]: I0116 21:19:30.738401 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef-goldmane-key-pair\") pod \"goldmane-666569f655-x8pr8\" (UID: \"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef\") " pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:30.739171 kubelet[3994]: I0116 21:19:30.738421 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/010e89e0-6574-4783-aa50-97e803ab00dc-tigera-ca-bundle\") pod \"calico-kube-controllers-6d6d7b95fc-88tnz\" (UID: \"010e89e0-6574-4783-aa50-97e803ab00dc\") " pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:30.739171 kubelet[3994]: I0116 21:19:30.738450 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef-goldmane-ca-bundle\") pod \"goldmane-666569f655-x8pr8\" (UID: \"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef\") " pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:30.739171 kubelet[3994]: I0116 21:19:30.738468 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b6a9edf-78a2-4eb2-9228-633a08a758ae-calico-apiserver-certs\") pod \"calico-apiserver-d8d4c957-8nr7d\" (UID: \"8b6a9edf-78a2-4eb2-9228-633a08a758ae\") " pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:30.739043 systemd[1]: Created slice kubepods-burstable-podd443c368_ce47_4d9c_990c_7ed720465c2e.slice - libcontainer container kubepods-burstable-podd443c368_ce47_4d9c_990c_7ed720465c2e.slice. Jan 16 21:19:30.739375 kubelet[3994]: I0116 21:19:30.738488 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9tlp\" (UniqueName: \"kubernetes.io/projected/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-kube-api-access-h9tlp\") pod \"whisker-7d9cdfd9b7-gg8mp\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:30.739375 kubelet[3994]: I0116 21:19:30.738514 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pjdd\" (UniqueName: \"kubernetes.io/projected/1afe31de-e2b2-4f78-ad0e-413ab9447575-kube-api-access-9pjdd\") pod \"coredns-668d6bf9bc-s6qvx\" (UID: \"1afe31de-e2b2-4f78-ad0e-413ab9447575\") " pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:30.739375 kubelet[3994]: I0116 21:19:30.738531 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afe31de-e2b2-4f78-ad0e-413ab9447575-config-volume\") pod \"coredns-668d6bf9bc-s6qvx\" (UID: \"1afe31de-e2b2-4f78-ad0e-413ab9447575\") " pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:30.739375 kubelet[3994]: I0116 21:19:30.738549 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wm9\" (UniqueName: 
\"kubernetes.io/projected/d443c368-ce47-4d9c-990c-7ed720465c2e-kube-api-access-d5wm9\") pod \"coredns-668d6bf9bc-pb2m9\" (UID: \"d443c368-ce47-4d9c-990c-7ed720465c2e\") " pod="kube-system/coredns-668d6bf9bc-pb2m9" Jan 16 21:19:30.739375 kubelet[3994]: I0116 21:19:30.738566 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef-config\") pod \"goldmane-666569f655-x8pr8\" (UID: \"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef\") " pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:30.739477 kubelet[3994]: I0116 21:19:30.738591 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbx8\" (UniqueName: \"kubernetes.io/projected/2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef-kube-api-access-pbbx8\") pod \"goldmane-666569f655-x8pr8\" (UID: \"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef\") " pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:30.739477 kubelet[3994]: I0116 21:19:30.738610 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69pf\" (UniqueName: \"kubernetes.io/projected/8b6a9edf-78a2-4eb2-9228-633a08a758ae-kube-api-access-m69pf\") pod \"calico-apiserver-d8d4c957-8nr7d\" (UID: \"8b6a9edf-78a2-4eb2-9228-633a08a758ae\") " pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:30.739477 kubelet[3994]: I0116 21:19:30.738628 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ea28179-07a2-4a0c-9cc7-b4eadca6090c-calico-apiserver-certs\") pod \"calico-apiserver-64fcf4db84-9ph5q\" (UID: \"2ea28179-07a2-4a0c-9cc7-b4eadca6090c\") " pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:30.739477 kubelet[3994]: I0116 21:19:30.738647 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-ca-bundle\") pod \"whisker-7d9cdfd9b7-gg8mp\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:30.739477 kubelet[3994]: I0116 21:19:30.738675 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntzvm\" (UniqueName: \"kubernetes.io/projected/be8f7779-a5b2-41ff-909f-1387d7e3242a-kube-api-access-ntzvm\") pod \"calico-apiserver-d8d4c957-cnh4g\" (UID: \"be8f7779-a5b2-41ff-909f-1387d7e3242a\") " pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:30.739582 kubelet[3994]: I0116 21:19:30.738693 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dm8c\" (UniqueName: \"kubernetes.io/projected/2ea28179-07a2-4a0c-9cc7-b4eadca6090c-kube-api-access-5dm8c\") pod \"calico-apiserver-64fcf4db84-9ph5q\" (UID: \"2ea28179-07a2-4a0c-9cc7-b4eadca6090c\") " pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:30.739582 kubelet[3994]: I0116 21:19:30.738710 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbrg\" (UniqueName: \"kubernetes.io/projected/010e89e0-6574-4783-aa50-97e803ab00dc-kube-api-access-4dbrg\") pod \"calico-kube-controllers-6d6d7b95fc-88tnz\" (UID: \"010e89e0-6574-4783-aa50-97e803ab00dc\") " pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:30.739582 kubelet[3994]: I0116 21:19:30.738728 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be8f7779-a5b2-41ff-909f-1387d7e3242a-calico-apiserver-certs\") pod \"calico-apiserver-d8d4c957-cnh4g\" (UID: 
\"be8f7779-a5b2-41ff-909f-1387d7e3242a\") " pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:30.739582 kubelet[3994]: I0116 21:19:30.738761 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-backend-key-pair\") pod \"whisker-7d9cdfd9b7-gg8mp\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:31.030920 containerd[2539]: time="2026-01-16T21:19:31.030753173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:31.031066 containerd[2539]: time="2026-01-16T21:19:31.031050485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:31.031162 containerd[2539]: time="2026-01-16T21:19:31.031144972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:31.031263 containerd[2539]: time="2026-01-16T21:19:31.031249414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9cdfd9b7-gg8mp,Uid:aafa5b59-2cd9-4228-8f97-19effdd5c1a7,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:31.031320 containerd[2539]: time="2026-01-16T21:19:31.031294598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:31.031449 containerd[2539]: time="2026-01-16T21:19:31.031437169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,}" 
Jan 16 21:19:31.037273 containerd[2539]: time="2026-01-16T21:19:31.037228982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:31.041773 containerd[2539]: time="2026-01-16T21:19:31.041750535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pb2m9,Uid:d443c368-ce47-4d9c-990c-7ed720465c2e,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:31.617262 containerd[2539]: time="2026-01-16T21:19:31.617231999Z" level=error msg="Failed to destroy network for sandbox \"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.641728 containerd[2539]: time="2026-01-16T21:19:31.641690744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.642164 kubelet[3994]: E0116 21:19:31.641840 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.642164 kubelet[3994]: E0116 21:19:31.641888 3994 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:31.642164 kubelet[3994]: E0116 21:19:31.641907 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:31.642422 kubelet[3994]: E0116 21:19:31.641943 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s6qvx_kube-system(1afe31de-e2b2-4f78-ad0e-413ab9447575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s6qvx_kube-system(1afe31de-e2b2-4f78-ad0e-413ab9447575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"120e4d3be3c84e7f6173be82b5b6739fb47c7d822e706de0b94ba36df0360943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s6qvx" podUID="1afe31de-e2b2-4f78-ad0e-413ab9447575" Jan 16 21:19:31.722154 containerd[2539]: time="2026-01-16T21:19:31.722107772Z" level=error msg="Failed to destroy network for sandbox \"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.728925 containerd[2539]: time="2026-01-16T21:19:31.728881983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pb2m9,Uid:d443c368-ce47-4d9c-990c-7ed720465c2e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.729575 kubelet[3994]: E0116 21:19:31.729021 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.729575 kubelet[3994]: E0116 21:19:31.729061 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pb2m9" Jan 16 21:19:31.729575 kubelet[3994]: E0116 21:19:31.729078 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pb2m9" Jan 16 21:19:31.729676 kubelet[3994]: E0116 21:19:31.729112 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pb2m9_kube-system(d443c368-ce47-4d9c-990c-7ed720465c2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pb2m9_kube-system(d443c368-ce47-4d9c-990c-7ed720465c2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9107ca1b2e0f4589d0ee8b7879e2bac9368a593e41c6a4469e1849a31d035122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pb2m9" podUID="d443c368-ce47-4d9c-990c-7ed720465c2e" Jan 16 21:19:31.741985 containerd[2539]: time="2026-01-16T21:19:31.741904487Z" level=error msg="Failed to destroy network for sandbox \"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.748374 containerd[2539]: time="2026-01-16T21:19:31.748339766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.748783 kubelet[3994]: E0116 21:19:31.748481 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.748783 kubelet[3994]: E0116 21:19:31.748527 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:31.748783 kubelet[3994]: E0116 21:19:31.748545 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:31.748901 kubelet[3994]: E0116 21:19:31.748576 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2b08a81c0d13f98a4c24cf125894354977e7470d94135df0e837c8fee60df6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:19:31.750622 containerd[2539]: time="2026-01-16T21:19:31.750592546Z" level=error msg="Failed to destroy network for sandbox \"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.756961 containerd[2539]: time="2026-01-16T21:19:31.756931849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9cdfd9b7-gg8mp,Uid:aafa5b59-2cd9-4228-8f97-19effdd5c1a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.757721 kubelet[3994]: E0116 21:19:31.757177 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.757721 kubelet[3994]: E0116 21:19:31.757209 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:31.757721 kubelet[3994]: E0116 21:19:31.757224 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:31.757846 kubelet[3994]: E0116 21:19:31.757257 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d9cdfd9b7-gg8mp_calico-system(aafa5b59-2cd9-4228-8f97-19effdd5c1a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d9cdfd9b7-gg8mp_calico-system(aafa5b59-2cd9-4228-8f97-19effdd5c1a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ebb9c45de806aade4424b2c6219a281953e4c367b35c707137370bdd3acf640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" podUID="aafa5b59-2cd9-4228-8f97-19effdd5c1a7" Jan 16 21:19:31.764624 containerd[2539]: time="2026-01-16T21:19:31.764601188Z" level=error msg="Failed to destroy network for sandbox \"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.764889 containerd[2539]: time="2026-01-16T21:19:31.764859444Z" level=error msg="Failed to destroy network for sandbox \"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.765176 containerd[2539]: time="2026-01-16T21:19:31.765160010Z" level=error msg="Failed to destroy network for sandbox \"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.768851 containerd[2539]: time="2026-01-16T21:19:31.768820977Z" level=error msg="Failed to destroy network for sandbox \"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.773425 containerd[2539]: time="2026-01-16T21:19:31.773399504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.773560 kubelet[3994]: E0116 21:19:31.773538 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 
21:19:31.773599 kubelet[3994]: E0116 21:19:31.773570 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:31.773599 kubelet[3994]: E0116 21:19:31.773585 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:31.773666 kubelet[3994]: E0116 21:19:31.773638 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2230fd36e2fdca5bc04389117647caec0016f7603d05e70be191211243ea357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:19:31.777658 containerd[2539]: time="2026-01-16T21:19:31.777625782Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.777863 kubelet[3994]: E0116 21:19:31.777746 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.777863 kubelet[3994]: E0116 21:19:31.777774 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:31.777863 kubelet[3994]: E0116 21:19:31.777785 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:31.777953 kubelet[3994]: E0116 21:19:31.777812 3994 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"414bb12d7564161aa9046b7ec42b4417ca9ad4c99ed4045822607ef378054ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:19:31.779828 containerd[2539]: time="2026-01-16T21:19:31.779761701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.779935 kubelet[3994]: E0116 21:19:31.779915 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.779980 kubelet[3994]: E0116 21:19:31.779964 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:31.780007 kubelet[3994]: E0116 21:19:31.779985 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:31.780048 kubelet[3994]: E0116 21:19:31.780028 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89bdc481df4fcdeb6f5651a04e2db1978d87af3ed3dbc83839c9170321c23845\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:19:31.781820 containerd[2539]: time="2026-01-16T21:19:31.781799693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.782025 kubelet[3994]: E0116 21:19:31.782011 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:31.782086 kubelet[3994]: E0116 21:19:31.782034 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:31.782086 kubelet[3994]: E0116 21:19:31.782050 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:31.782086 kubelet[3994]: E0116 21:19:31.782077 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2513f6ca29735f4663a056a7bcdd3968640a88e070d106fa2e1a2ac95da542e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:19:32.256405 systemd[1]: Created slice kubepods-besteffort-podad5a70af_916a_4e95_9866_1f1c8f4329d0.slice - libcontainer container kubepods-besteffort-podad5a70af_916a_4e95_9866_1f1c8f4329d0.slice. Jan 16 21:19:32.258422 containerd[2539]: time="2026-01-16T21:19:32.258404545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:32.297484 containerd[2539]: time="2026-01-16T21:19:32.297453582Z" level=error msg="Failed to destroy network for sandbox \"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:32.302123 containerd[2539]: time="2026-01-16T21:19:32.302082384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:32.302424 kubelet[3994]: E0116 
21:19:32.302389 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:32.302484 kubelet[3994]: E0116 21:19:32.302435 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:32.302484 kubelet[3994]: E0116 21:19:32.302451 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:32.302538 kubelet[3994]: E0116 21:19:32.302482 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4483e50bd68a3886be2c84461367e3bd88f1d21c322e55e0362add3b5034759\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:32.347849 containerd[2539]: time="2026-01-16T21:19:32.347717299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 21:19:32.569052 systemd[1]: run-netns-cni\x2d018d43ac\x2df1ba\x2d2ed2\x2d0cfd\x2d977b00c062b8.mount: Deactivated successfully. Jan 16 21:19:32.569125 systemd[1]: run-netns-cni\x2dba4b3d1b\x2ded40\x2de2c6\x2d1074\x2d372bf38734cf.mount: Deactivated successfully. Jan 16 21:19:32.569168 systemd[1]: run-netns-cni\x2d20c5ed0f\x2d9908\x2dbeb6\x2d3868\x2dd0681ad1ada3.mount: Deactivated successfully. Jan 16 21:19:32.569208 systemd[1]: run-netns-cni\x2d604623bb\x2d3beb\x2d77b7\x2d6af2\x2d0f1227108fc6.mount: Deactivated successfully. Jan 16 21:19:32.569246 systemd[1]: run-netns-cni\x2d22a88335\x2d6544\x2d655f\x2d22ac\x2da8849c4613b7.mount: Deactivated successfully. Jan 16 21:19:32.569286 systemd[1]: run-netns-cni\x2dd8258223\x2d2c18\x2d1ca2\x2d7406\x2d3127a3b5b5bf.mount: Deactivated successfully. Jan 16 21:19:32.569325 systemd[1]: run-netns-cni\x2d08db5a43\x2d12fe\x2d3a0b\x2df2c5\x2dc12899ec0a4f.mount: Deactivated successfully. 
Jan 16 21:19:34.688846 kubelet[3994]: I0116 21:19:34.688797 3994 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 21:19:34.726540 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 16 21:19:34.726614 kernel: audit: type=1325 audit(1768598374.723:583): table=filter:120 family=2 entries=21 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:34.723000 audit[5004]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:34.731851 kernel: audit: type=1300 audit(1768598374.723:583): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff66909ff0 a2=0 a3=7fff66909fdc items=0 ppid=4151 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:34.723000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff66909ff0 a2=0 a3=7fff66909fdc items=0 ppid=4151 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:34.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:34.740845 kernel: audit: type=1327 audit(1768598374.723:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:34.737000 audit[5004]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:34.743954 kernel: audit: type=1325 audit(1768598374.737:584): table=nat:121 family=2 entries=19 op=nft_register_chain pid=5004 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:34.737000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff66909ff0 a2=0 a3=7fff66909fdc items=0 ppid=4151 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:34.750387 kernel: audit: type=1300 audit(1768598374.737:584): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff66909ff0 a2=0 a3=7fff66909fdc items=0 ppid=4151 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:34.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:34.754123 kernel: audit: type=1327 audit(1768598374.737:584): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:43.252609 containerd[2539]: time="2026-01-16T21:19:43.252320703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9cdfd9b7-gg8mp,Uid:aafa5b59-2cd9-4228-8f97-19effdd5c1a7,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:43.252609 containerd[2539]: time="2026-01-16T21:19:43.252523939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:43.253033 containerd[2539]: time="2026-01-16T21:19:43.252696683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:43.253033 containerd[2539]: time="2026-01-16T21:19:43.252760234Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:43.253033 containerd[2539]: time="2026-01-16T21:19:43.252954145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:43.279760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940266917.mount: Deactivated successfully. Jan 16 21:19:43.751197 containerd[2539]: time="2026-01-16T21:19:43.751147818Z" level=error msg="Failed to destroy network for sandbox \"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:43.752595 systemd[1]: run-netns-cni\x2dc755be6b\x2dde5a\x2dd627\x2d7419\x2d546308fa1aab.mount: Deactivated successfully. 
Jan 16 21:19:43.862266 containerd[2539]: time="2026-01-16T21:19:43.862227757Z" level=error msg="Failed to destroy network for sandbox \"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:43.907644 containerd[2539]: time="2026-01-16T21:19:43.907618984Z" level=error msg="Failed to destroy network for sandbox \"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.004368 containerd[2539]: time="2026-01-16T21:19:44.004277786Z" level=error msg="Failed to destroy network for sandbox \"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.049955 containerd[2539]: time="2026-01-16T21:19:44.049923249Z" level=error msg="Failed to destroy network for sandbox \"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.179640 containerd[2539]: time="2026-01-16T21:19:44.179590980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.179821 kubelet[3994]: E0116 21:19:44.179793 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.180047 kubelet[3994]: E0116 21:19:44.179854 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:44.180047 kubelet[3994]: E0116 21:19:44.179872 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" Jan 16 21:19:44.180047 kubelet[3994]: E0116 21:19:44.179910 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b3d20c96a3dd714e77db21119a3d187c72cc2c9a962810197c1a961f873e714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:19:44.252645 containerd[2539]: time="2026-01-16T21:19:44.252622371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:44.252981 containerd[2539]: time="2026-01-16T21:19:44.252916190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:44.275000 containerd[2539]: time="2026-01-16T21:19:44.274921414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9cdfd9b7-gg8mp,Uid:aafa5b59-2cd9-4228-8f97-19effdd5c1a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.275583 kubelet[3994]: E0116 21:19:44.275523 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 16 21:19:44.275657 kubelet[3994]: E0116 21:19:44.275625 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:44.275657 kubelet[3994]: E0116 21:19:44.275647 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" Jan 16 21:19:44.275704 kubelet[3994]: E0116 21:19:44.275690 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d9cdfd9b7-gg8mp_calico-system(aafa5b59-2cd9-4228-8f97-19effdd5c1a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d9cdfd9b7-gg8mp_calico-system(aafa5b59-2cd9-4228-8f97-19effdd5c1a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0222b9d9468ab7395345b17fce117d19d50084c5600c2ee78d8e82447834fadc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d9cdfd9b7-gg8mp" podUID="aafa5b59-2cd9-4228-8f97-19effdd5c1a7" Jan 16 21:19:44.280234 systemd[1]: run-netns-cni\x2d89a89deb\x2d4577\x2d2602\x2d15e7\x2d60bc80d134e8.mount: Deactivated successfully. 
Jan 16 21:19:44.280316 systemd[1]: run-netns-cni\x2d018f89bb\x2d41ef\x2d93d2\x2d8dfd\x2d5d769328516f.mount: Deactivated successfully. Jan 16 21:19:44.321790 containerd[2539]: time="2026-01-16T21:19:44.321744606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.321934 kubelet[3994]: E0116 21:19:44.321895 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.321984 kubelet[3994]: E0116 21:19:44.321945 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:44.321984 kubelet[3994]: E0116 21:19:44.321961 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" Jan 16 21:19:44.322033 kubelet[3994]: E0116 21:19:44.321994 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb4cd59477ef30c14406a2872bdd61df10c6570af0016cf1bd6249707783b0a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:19:44.369939 containerd[2539]: time="2026-01-16T21:19:44.369918405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:44.431436 containerd[2539]: time="2026-01-16T21:19:44.431389467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.431584 kubelet[3994]: E0116 21:19:44.431546 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.431629 kubelet[3994]: E0116 21:19:44.431603 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:44.431629 kubelet[3994]: E0116 21:19:44.431622 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6qvx" Jan 16 21:19:44.431679 kubelet[3994]: E0116 21:19:44.431653 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s6qvx_kube-system(1afe31de-e2b2-4f78-ad0e-413ab9447575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s6qvx_kube-system(1afe31de-e2b2-4f78-ad0e-413ab9447575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d67881c38b4b50157c51a60cfebee54dd04a98108d5e5d96a190b5a945714e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s6qvx" 
podUID="1afe31de-e2b2-4f78-ad0e-413ab9447575" Jan 16 21:19:44.482294 containerd[2539]: time="2026-01-16T21:19:44.482261913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.482455 kubelet[3994]: E0116 21:19:44.482429 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.482510 kubelet[3994]: E0116 21:19:44.482464 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:44.482510 kubelet[3994]: E0116 21:19:44.482478 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" Jan 16 21:19:44.482552 kubelet[3994]: E0116 21:19:44.482508 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6c9f6d43fa7944dac5d778d22ddb8a320719e6df343a1358ac2bb47c711f584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:19:44.527398 containerd[2539]: time="2026-01-16T21:19:44.527334668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156882266" Jan 16 21:19:44.625885 containerd[2539]: time="2026-01-16T21:19:44.625826315Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:44.699967 containerd[2539]: time="2026-01-16T21:19:44.699913973Z" level=error msg="Failed to destroy network for sandbox \"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.701555 systemd[1]: run-netns-cni\x2d38bd19b5\x2d1e50\x2d085e\x2d3b50\x2dea8b9c103665.mount: Deactivated successfully. 
Jan 16 21:19:44.763451 containerd[2539]: time="2026-01-16T21:19:44.763418965Z" level=error msg="Failed to destroy network for sandbox \"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.765327 systemd[1]: run-netns-cni\x2d887f7052\x2de1c1\x2dcc7f\x2d183f\x2dd1bf7da4ba58.mount: Deactivated successfully. Jan 16 21:19:44.828446 containerd[2539]: time="2026-01-16T21:19:44.828394011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:44.828846 containerd[2539]: time="2026-01-16T21:19:44.828762975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.481014331s" Jan 16 21:19:44.828846 containerd[2539]: time="2026-01-16T21:19:44.828787638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 16 21:19:44.834950 containerd[2539]: time="2026-01-16T21:19:44.834662417Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 21:19:44.922894 containerd[2539]: time="2026-01-16T21:19:44.922822502Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.923217 kubelet[3994]: E0116 21:19:44.923150 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.923217 kubelet[3994]: E0116 21:19:44.923190 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:44.923217 kubelet[3994]: E0116 21:19:44.923208 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" Jan 16 21:19:44.923314 kubelet[3994]: E0116 
21:19:44.923238 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d8c2a4fdba2cc1c1e327f5aa2f679ba960195db05bc8910d2e0b40a7ed63140\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:19:44.925197 containerd[2539]: time="2026-01-16T21:19:44.925150451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.925439 kubelet[3994]: E0116 21:19:44.925404 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:44.925490 kubelet[3994]: E0116 21:19:44.925449 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:44.925490 kubelet[3994]: E0116 21:19:44.925466 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8pr8" Jan 16 21:19:44.925548 kubelet[3994]: E0116 21:19:44.925499 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2aeccecc777bf97cd490ed6ac728baba9535fac4b91dc3797c5a511124b5677\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:19:45.175561 containerd[2539]: time="2026-01-16T21:19:45.175507374Z" level=info msg="Container 6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:45.252675 containerd[2539]: time="2026-01-16T21:19:45.252656642Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:45.322801 containerd[2539]: time="2026-01-16T21:19:45.322780617Z" level=info msg="CreateContainer within sandbox \"76d1834f2612cf383562ad0acdf5f4160775ea513d47b6c4c0f61f7cd4781f4c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915\"" Jan 16 21:19:45.323155 containerd[2539]: time="2026-01-16T21:19:45.323135738Z" level=info msg="StartContainer for \"6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915\"" Jan 16 21:19:45.324434 containerd[2539]: time="2026-01-16T21:19:45.324413275Z" level=info msg="connecting to shim 6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915" address="unix:///run/containerd/s/e6a190a3bb24c25561ff53b7d2260f1269d8cedd87a7343a17943cb83e433797" protocol=ttrpc version=3 Jan 16 21:19:45.342091 systemd[1]: Started cri-containerd-6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915.scope - libcontainer container 6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915. 
Jan 16 21:19:45.374000 audit: BPF prog-id=195 op=LOAD Jan 16 21:19:45.374000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.381730 kernel: audit: type=1334 audit(1768598385.374:585): prog-id=195 op=LOAD Jan 16 21:19:45.381764 kernel: audit: type=1300 audit(1768598385.374:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.390525 kernel: audit: type=1327 audit(1768598385.374:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.374000 audit: BPF prog-id=196 op=LOAD Jan 16 21:19:45.392308 kernel: audit: type=1334 audit(1768598385.374:586): prog-id=196 op=LOAD Jan 16 21:19:45.374000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.396051 kernel: audit: type=1300 
audit(1768598385.374:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.400942 kernel: audit: type=1327 audit(1768598385.374:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.402943 kernel: audit: type=1334 audit(1768598385.374:587): prog-id=196 op=UNLOAD Jan 16 21:19:45.374000 audit: BPF prog-id=196 op=UNLOAD Jan 16 21:19:45.374000 audit[5193]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.407611 kernel: audit: type=1300 audit(1768598385.374:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.374000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.413735 kernel: audit: type=1327 audit(1768598385.374:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.374000 audit: BPF prog-id=195 op=UNLOAD Jan 16 21:19:45.416364 kernel: audit: type=1334 audit(1768598385.374:588): prog-id=195 op=UNLOAD Jan 16 21:19:45.374000 audit[5193]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.374000 audit: BPF prog-id=197 op=LOAD Jan 16 21:19:45.374000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4500 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:45.374000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630393965343030303462653761653039383737333938623130626262 Jan 16 21:19:45.491713 containerd[2539]: time="2026-01-16T21:19:45.490892619Z" level=info msg="StartContainer for \"6099e40004be7ae09877398b10bbb4c2c4cc19d3c9f8519117a30fed326ec915\" returns successfully" Jan 16 21:19:45.519225 containerd[2539]: time="2026-01-16T21:19:45.519203084Z" level=error msg="Failed to destroy network for sandbox \"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:45.520654 systemd[1]: run-netns-cni\x2d2aeb7658\x2df604\x2d283c\x2d7095\x2db58deaa66274.mount: Deactivated successfully. Jan 16 21:19:45.573518 containerd[2539]: time="2026-01-16T21:19:45.573472517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:19:45.573635 kubelet[3994]: E0116 21:19:45.573605 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 16 21:19:45.573824 kubelet[3994]: E0116 21:19:45.573653 3994 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:45.573824 kubelet[3994]: E0116 21:19:45.573669 3994 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pf8t" Jan 16 21:19:45.573824 kubelet[3994]: E0116 21:19:45.573702 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e18365911e65d53b1835c4d0d50d789dd91b05d765671294c4948140d513ac9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:19:45.675977 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 21:19:45.676036 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jan 16 21:19:45.815777 kubelet[3994]: I0116 21:19:45.815718 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-ca-bundle\") pod \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " Jan 16 21:19:45.815777 kubelet[3994]: I0116 21:19:45.815751 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-backend-key-pair\") pod \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " Jan 16 21:19:45.815777 kubelet[3994]: I0116 21:19:45.815769 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9tlp\" (UniqueName: \"kubernetes.io/projected/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-kube-api-access-h9tlp\") pod \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\" (UID: \"aafa5b59-2cd9-4228-8f97-19effdd5c1a7\") " Jan 16 21:19:45.816148 kubelet[3994]: I0116 21:19:45.816062 3994 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aafa5b59-2cd9-4228-8f97-19effdd5c1a7" (UID: "aafa5b59-2cd9-4228-8f97-19effdd5c1a7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 21:19:45.821481 systemd[1]: var-lib-kubelet-pods-aafa5b59\x2d2cd9\x2d4228\x2d8f97\x2d19effdd5c1a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9tlp.mount: Deactivated successfully. Jan 16 21:19:45.821565 systemd[1]: var-lib-kubelet-pods-aafa5b59\x2d2cd9\x2d4228\x2d8f97\x2d19effdd5c1a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 16 21:19:45.823543 kubelet[3994]: I0116 21:19:45.823517 3994 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aafa5b59-2cd9-4228-8f97-19effdd5c1a7" (UID: "aafa5b59-2cd9-4228-8f97-19effdd5c1a7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 21:19:45.823611 kubelet[3994]: I0116 21:19:45.823575 3994 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-kube-api-access-h9tlp" (OuterVolumeSpecName: "kube-api-access-h9tlp") pod "aafa5b59-2cd9-4228-8f97-19effdd5c1a7" (UID: "aafa5b59-2cd9-4228-8f97-19effdd5c1a7"). InnerVolumeSpecName "kube-api-access-h9tlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 21:19:45.916434 kubelet[3994]: I0116 21:19:45.916405 3994 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-ca-bundle\") on node \"ci-4580.0.0-p-452f1e7704\" DevicePath \"\"" Jan 16 21:19:45.916434 kubelet[3994]: I0116 21:19:45.916429 3994 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-whisker-backend-key-pair\") on node \"ci-4580.0.0-p-452f1e7704\" DevicePath \"\"" Jan 16 21:19:45.916519 kubelet[3994]: I0116 21:19:45.916438 3994 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9tlp\" (UniqueName: \"kubernetes.io/projected/aafa5b59-2cd9-4228-8f97-19effdd5c1a7-kube-api-access-h9tlp\") on node \"ci-4580.0.0-p-452f1e7704\" DevicePath \"\"" Jan 16 21:19:46.253548 containerd[2539]: time="2026-01-16T21:19:46.253430405Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pb2m9,Uid:d443c368-ce47-4d9c-990c-7ed720465c2e,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:46.257843 systemd[1]: Removed slice kubepods-besteffort-podaafa5b59_2cd9_4228_8f97_19effdd5c1a7.slice - libcontainer container kubepods-besteffort-podaafa5b59_2cd9_4228_8f97_19effdd5c1a7.slice. Jan 16 21:19:46.406562 kubelet[3994]: I0116 21:19:46.406512 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t8zsr" podStartSLOduration=2.2006541410000002 podStartE2EDuration="38.406499442s" podCreationTimestamp="2026-01-16 21:19:08 +0000 UTC" firstStartedPulling="2026-01-16 21:19:08.623554389 +0000 UTC m=+22.471041303" lastFinishedPulling="2026-01-16 21:19:44.829399684 +0000 UTC m=+58.676886604" observedRunningTime="2026-01-16 21:19:46.404663658 +0000 UTC m=+60.252150573" watchObservedRunningTime="2026-01-16 21:19:46.406499442 +0000 UTC m=+60.253986358" Jan 16 21:19:46.418796 systemd-networkd[2320]: cali6f55be51274: Link UP Jan 16 21:19:46.418995 systemd-networkd[2320]: cali6f55be51274: Gained carrier Jan 16 21:19:46.433891 containerd[2539]: 2026-01-16 21:19:46.334 [INFO][5283] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:19:46.433891 containerd[2539]: 2026-01-16 21:19:46.340 [INFO][5283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0 coredns-668d6bf9bc- kube-system d443c368-ce47-4d9c-990c-7ed720465c2e 856 0 2026-01-16 21:18:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 coredns-668d6bf9bc-pb2m9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f55be51274 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-" Jan 16 21:19:46.433891 containerd[2539]: 2026-01-16 21:19:46.340 [INFO][5283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.433891 containerd[2539]: 2026-01-16 21:19:46.359 [INFO][5295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" HandleID="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.359 [INFO][5295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" HandleID="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"coredns-668d6bf9bc-pb2m9", "timestamp":"2026-01-16 21:19:46.359122954 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.359 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.359 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.359 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.366 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.370 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.374 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.376 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434050 containerd[2539]: 2026-01-16 21:19:46.378 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.378 [INFO][5295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.380 [INFO][5295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.388 [INFO][5295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" 
host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.398 [INFO][5295] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.65/26] block=192.168.56.64/26 handle="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.398 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.65/26] handle="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.398 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:46.434244 containerd[2539]: 2026-01-16 21:19:46.398 [INFO][5295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.65/26] IPv6=[] ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" HandleID="k8s-pod-network.6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.402 [INFO][5283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d443c368-ce47-4d9c-990c-7ed720465c2e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"coredns-668d6bf9bc-pb2m9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f55be51274", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.402 [INFO][5283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.65/32] ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.402 [INFO][5283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f55be51274 ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.417 [INFO][5283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.417 [INFO][5283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d443c368-ce47-4d9c-990c-7ed720465c2e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed", Pod:"coredns-668d6bf9bc-pb2m9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f55be51274", MAC:"de:82:9a:59:df:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:46.434376 containerd[2539]: 2026-01-16 21:19:46.431 [INFO][5283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-pb2m9" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--pb2m9-eth0" Jan 16 21:19:46.485948 systemd[1]: Created slice kubepods-besteffort-pod6049a9b7_c23b_4cd6_b5e0_33d5a278ce35.slice - libcontainer container kubepods-besteffort-pod6049a9b7_c23b_4cd6_b5e0_33d5a278ce35.slice. 
Jan 16 21:19:46.520673 kubelet[3994]: I0116 21:19:46.520558 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a9b7-c23b-4cd6-b5e0-33d5a278ce35-whisker-ca-bundle\") pod \"whisker-589885c4ff-m24sm\" (UID: \"6049a9b7-c23b-4cd6-b5e0-33d5a278ce35\") " pod="calico-system/whisker-589885c4ff-m24sm" Jan 16 21:19:46.520673 kubelet[3994]: I0116 21:19:46.520587 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwbq\" (UniqueName: \"kubernetes.io/projected/6049a9b7-c23b-4cd6-b5e0-33d5a278ce35-kube-api-access-4gwbq\") pod \"whisker-589885c4ff-m24sm\" (UID: \"6049a9b7-c23b-4cd6-b5e0-33d5a278ce35\") " pod="calico-system/whisker-589885c4ff-m24sm" Jan 16 21:19:46.520673 kubelet[3994]: I0116 21:19:46.520612 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6049a9b7-c23b-4cd6-b5e0-33d5a278ce35-whisker-backend-key-pair\") pod \"whisker-589885c4ff-m24sm\" (UID: \"6049a9b7-c23b-4cd6-b5e0-33d5a278ce35\") " pod="calico-system/whisker-589885c4ff-m24sm" Jan 16 21:19:46.968100 containerd[2539]: time="2026-01-16T21:19:46.968027009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589885c4ff-m24sm,Uid:6049a9b7-c23b-4cd6-b5e0-33d5a278ce35,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:47.796937 systemd-networkd[2320]: cali6f55be51274: Gained IPv6LL Jan 16 21:19:48.254326 kubelet[3994]: I0116 21:19:48.254305 3994 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafa5b59-2cd9-4228-8f97-19effdd5c1a7" path="/var/lib/kubelet/pods/aafa5b59-2cd9-4228-8f97-19effdd5c1a7/volumes" Jan 16 21:19:48.641000 audit: BPF prog-id=198 op=LOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdded1d6d0 a2=98 a3=1fffffffffffffff 
items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.641000 audit: BPF prog-id=198 op=UNLOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdded1d6a0 a3=0 items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.641000 audit: BPF prog-id=199 op=LOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdded1d5b0 a2=94 a3=3 items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.641000 audit: BPF prog-id=199 op=UNLOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=7ffdded1d5b0 a2=94 a3=3 items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.641000 audit: BPF prog-id=200 op=LOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdded1d5f0 a2=94 a3=7ffdded1d7d0 items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.641000 audit: BPF prog-id=200 op=UNLOAD Jan 16 21:19:48.641000 audit[5480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdded1d5f0 a2=94 a3=7ffdded1d7d0 items=0 ppid=5417 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.641000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:19:48.642000 audit: 
BPF prog-id=201 op=LOAD Jan 16 21:19:48.642000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb229d2f0 a2=98 a3=3 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.642000 audit: BPF prog-id=201 op=UNLOAD Jan 16 21:19:48.642000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb229d2c0 a3=0 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.642000 audit: BPF prog-id=202 op=LOAD Jan 16 21:19:48.642000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb229d0e0 a2=94 a3=54428f items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.642000 audit: BPF prog-id=202 op=UNLOAD Jan 16 21:19:48.642000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdb229d0e0 a2=94 a3=54428f items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.642000 audit: BPF prog-id=203 op=LOAD Jan 16 21:19:48.642000 audit[5481]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb229d110 a2=94 a3=2 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.642000 audit: BPF prog-id=203 op=UNLOAD Jan 16 21:19:48.642000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdb229d110 a2=0 a3=2 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.744000 audit: BPF prog-id=204 op=LOAD Jan 16 21:19:48.744000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb229cfd0 a2=94 a3=1 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.744000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.744000 audit: BPF prog-id=204 op=UNLOAD Jan 16 21:19:48.744000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdb229cfd0 a2=94 a3=1 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.744000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=205 op=LOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdb229cfc0 
a2=94 a3=4 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=205 op=UNLOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdb229cfc0 a2=0 a3=4 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=206 op=LOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb229ce20 a2=94 a3=5 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=206 op=UNLOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb229ce20 a2=0 a3=5 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=207 op=LOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdb229d040 a2=94 a3=6 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=207 op=UNLOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdb229d040 a2=0 a3=6 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.752000 audit: BPF prog-id=208 op=LOAD Jan 16 21:19:48.752000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdb229c7f0 a2=94 a3=88 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.753000 audit: BPF prog-id=209 op=LOAD Jan 16 21:19:48.753000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdb229c670 a2=94 a3=2 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.753000 audit: BPF prog-id=209 op=UNLOAD Jan 16 21:19:48.753000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdb229c6a0 a2=0 a3=7ffdb229c7a0 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.753000 audit: BPF prog-id=208 op=UNLOAD Jan 16 21:19:48.753000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=30d3cd10 a2=0 a3=c4245608cff152c1 items=0 ppid=5417 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:19:48.758000 audit: BPF prog-id=210 op=LOAD Jan 16 21:19:48.758000 audit[5484]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1f429e40 a2=98 a3=1999999999999999 items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:48.758000 audit: BPF prog-id=210 op=UNLOAD Jan 16 21:19:48.758000 audit[5484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe1f429e10 a3=0 items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.758000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:48.758000 audit: BPF prog-id=211 op=LOAD Jan 16 21:19:48.758000 audit[5484]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1f429d20 a2=94 a3=ffff items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:48.758000 audit: BPF prog-id=211 op=UNLOAD Jan 16 21:19:48.758000 audit[5484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1f429d20 a2=94 a3=ffff items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:48.758000 audit: BPF prog-id=212 op=LOAD Jan 16 21:19:48.758000 audit[5484]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1f429d60 a2=94 a3=7ffe1f429f40 items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:48.759000 audit: BPF prog-id=212 op=UNLOAD Jan 16 21:19:48.759000 audit[5484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1f429d60 a2=94 a3=7ffe1f429f40 items=0 ppid=5417 pid=5484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:48.759000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:19:49.231924 systemd-networkd[2320]: vxlan.calico: Link UP Jan 16 21:19:49.231932 systemd-networkd[2320]: vxlan.calico: Gained carrier Jan 16 21:19:49.243000 audit: BPF prog-id=213 op=LOAD Jan 16 21:19:49.243000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea400b930 a2=98 a3=0 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=213 op=UNLOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffea400b900 a3=0 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=214 op=LOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea400b740 a2=94 a3=54428f items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=214 op=UNLOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea400b740 a2=94 a3=54428f items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=215 op=LOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea400b770 a2=94 a3=2 items=0 
ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=215 op=UNLOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea400b770 a2=0 a3=2 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=216 op=LOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea400b520 a2=94 a3=4 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=216 op=UNLOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffea400b520 a2=94 a3=4 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=217 op=LOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea400b620 a2=94 a3=7ffea400b7a0 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.245000 audit: BPF prog-id=217 op=UNLOAD Jan 16 21:19:49.245000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffea400b620 a2=0 a3=7ffea400b7a0 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.245000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.246000 audit: BPF prog-id=218 op=LOAD Jan 16 21:19:49.246000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea400ad50 a2=94 a3=2 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.246000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.246000 audit: BPF prog-id=218 op=UNLOAD Jan 16 21:19:49.246000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffea400ad50 a2=0 a3=2 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.246000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.246000 audit: BPF prog-id=219 op=LOAD Jan 16 21:19:49.246000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea400ae50 a2=94 a3=30 items=0 ppid=5417 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.246000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:19:49.253000 audit: BPF prog-id=220 op=LOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0af5f130 a2=98 a3=0 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.253000 audit: BPF prog-id=220 op=UNLOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0af5f100 a3=0 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.253000 audit: BPF prog-id=221 op=LOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0af5ef20 a2=94 a3=54428f items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.253000 audit: BPF prog-id=221 op=UNLOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0af5ef20 a2=94 a3=54428f items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.253000 audit: BPF prog-id=222 op=LOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0af5ef50 a2=94 a3=2 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.253000 audit: BPF prog-id=222 op=UNLOAD Jan 16 21:19:49.253000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0af5ef50 a2=0 a3=2 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.362000 audit: BPF prog-id=223 op=LOAD Jan 16 21:19:49.362000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0af5ee10 a2=94 a3=1 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.362000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.362000 audit: BPF prog-id=223 op=UNLOAD Jan 16 21:19:49.362000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0af5ee10 a2=94 a3=1 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.370000 audit: BPF prog-id=224 op=LOAD Jan 16 21:19:49.370000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0af5ee00 a2=94 a3=4 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.370000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.370000 audit: BPF prog-id=224 op=UNLOAD Jan 16 21:19:49.370000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0af5ee00 a2=0 a3=4 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.370000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.370000 audit: BPF prog-id=225 op=LOAD Jan 16 21:19:49.370000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0af5ec60 a2=94 a3=5 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.370000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.370000 audit: BPF prog-id=225 op=UNLOAD Jan 16 21:19:49.370000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0af5ec60 a2=0 a3=5 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.370000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=226 op=LOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0af5ee80 a2=94 a3=6 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=226 op=UNLOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0af5ee80 a2=0 a3=6 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=227 op=LOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0af5e630 a2=94 a3=88 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=228 op=LOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff0af5e4b0 a2=94 a3=2 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=228 op=UNLOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff0af5e4e0 a2=0 a3=7fff0af5e5e0 items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.371000 audit: BPF prog-id=227 op=UNLOAD Jan 16 21:19:49.371000 audit[5515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=26e0ed10 a2=0 a3=7c5ac58ba866fbef items=0 ppid=5417 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:19:49.375000 audit: BPF prog-id=219 op=UNLOAD Jan 16 21:19:49.375000 audit[5417]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0003722c0 a2=0 a3=0 items=0 ppid=5336 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.375000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 21:19:49.398795 containerd[2539]: 
time="2026-01-16T21:19:49.398680548Z" level=info msg="connecting to shim 6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed" address="unix:///run/containerd/s/31d5691b537ad2655bcce05ed429ec018803e51a0bf1f22a4624577c320afd08" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:49.423008 systemd[1]: Started cri-containerd-6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed.scope - libcontainer container 6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed. Jan 16 21:19:49.430000 audit: BPF prog-id=229 op=LOAD Jan 16 21:19:49.430000 audit: BPF prog-id=230 op=LOAD Jan 16 21:19:49.430000 audit[5546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.430000 audit: BPF prog-id=230 op=UNLOAD Jan 16 21:19:49.430000 audit[5546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.431000 audit: BPF prog-id=231 op=LOAD Jan 16 21:19:49.431000 
audit[5546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.431000 audit: BPF prog-id=232 op=LOAD Jan 16 21:19:49.431000 audit[5546]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.431000 audit: BPF prog-id=232 op=UNLOAD Jan 16 21:19:49.431000 audit[5546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.431000 audit: BPF 
prog-id=231 op=UNLOAD Jan 16 21:19:49.431000 audit[5546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.431000 audit: BPF prog-id=233 op=LOAD Jan 16 21:19:49.431000 audit[5546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5533 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343031306332333962366563626465656438643537343938316665 Jan 16 21:19:49.589000 audit[5584]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5584 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:49.589000 audit[5584]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffed053ef00 a2=0 a3=7ffed053eeec items=0 ppid=5417 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.589000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:49.590000 audit[5585]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5585 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:49.590000 audit[5585]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffde1696210 a2=0 a3=7ffde16961fc items=0 ppid=5417 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.590000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:49.619000 audit[5582]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:49.619000 audit[5582]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcb9e9f0a0 a2=0 a3=7ffcb9e9f08c items=0 ppid=5417 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.619000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:49.681050 containerd[2539]: time="2026-01-16T21:19:49.681030248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pb2m9,Uid:d443c368-ce47-4d9c-990c-7ed720465c2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed\"" Jan 16 21:19:49.684243 containerd[2539]: 
time="2026-01-16T21:19:49.684222496Z" level=info msg="CreateContainer within sandbox \"6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:19:49.620000 audit[5587]: NETFILTER_CFG table=filter:125 family=2 entries=73 op=nft_register_chain pid=5587 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:49.620000 audit[5587]: SYSCALL arch=c000003e syscall=46 success=yes exit=38620 a0=3 a1=7ffe9919f4e0 a2=0 a3=7ffe9919f4cc items=0 ppid=5417 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.620000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:49.939757 systemd-networkd[2320]: calid7c710219b5: Link UP Jan 16 21:19:49.940413 systemd-networkd[2320]: calid7c710219b5: Gained carrier Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.893 [INFO][5598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0 whisker-589885c4ff- calico-system 6049a9b7-c23b-4cd6-b5e0-33d5a278ce35 952 0 2026-01-16 21:19:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:589885c4ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 whisker-589885c4ff-m24sm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid7c710219b5 [] [] }} ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.893 [INFO][5598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.912 [INFO][5612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" HandleID="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Workload="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.912 [INFO][5612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" HandleID="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Workload="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"whisker-589885c4ff-m24sm", "timestamp":"2026-01-16 21:19:49.912005115 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.912 [INFO][5612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.912 [INFO][5612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.912 [INFO][5612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.916 [INFO][5612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.919 [INFO][5612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.922 [INFO][5612] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.923 [INFO][5612] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.925 [INFO][5612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.925 [INFO][5612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.926 [INFO][5612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.932 [INFO][5612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.936 [INFO][5612] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.66/26] block=192.168.56.64/26 handle="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.936 [INFO][5612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.66/26] handle="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.936 [INFO][5612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:49.954274 containerd[2539]: 2026-01-16 21:19:49.936 [INFO][5612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.66/26] IPv6=[] ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" HandleID="k8s-pod-network.dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Workload="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.937 [INFO][5598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0", GenerateName:"whisker-589885c4ff-", Namespace:"calico-system", SelfLink:"", UID:"6049a9b7-c23b-4cd6-b5e0-33d5a278ce35", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589885c4ff", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"whisker-589885c4ff-m24sm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7c710219b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.938 [INFO][5598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.66/32] ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.938 [INFO][5598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7c710219b5 ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.940 [INFO][5598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.941 [INFO][5598] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0", GenerateName:"whisker-589885c4ff-", Namespace:"calico-system", SelfLink:"", UID:"6049a9b7-c23b-4cd6-b5e0-33d5a278ce35", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589885c4ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b", Pod:"whisker-589885c4ff-m24sm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7c710219b5", MAC:"ba:12:20:21:87:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:49.954853 containerd[2539]: 2026-01-16 21:19:49.949 [INFO][5598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" 
Namespace="calico-system" Pod="whisker-589885c4ff-m24sm" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-whisker--589885c4ff--m24sm-eth0" Jan 16 21:19:49.963000 audit[5626]: NETFILTER_CFG table=filter:126 family=2 entries=63 op=nft_register_chain pid=5626 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:49.963000 audit[5626]: SYSCALL arch=c000003e syscall=46 success=yes exit=37048 a0=3 a1=7ffdf55fbd50 a2=0 a3=7ffdf55fbd3c items=0 ppid=5417 pid=5626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:49.963000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:50.084004 containerd[2539]: time="2026-01-16T21:19:50.083313560Z" level=info msg="Container 8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:50.428512 containerd[2539]: time="2026-01-16T21:19:50.428491448Z" level=info msg="CreateContainer within sandbox \"6b4010c239b6ecbdeed8d574981fe407eb9472a5ae0aa8d92bff266f0a2168ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a\"" Jan 16 21:19:50.428972 containerd[2539]: time="2026-01-16T21:19:50.428952362Z" level=info msg="StartContainer for \"8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a\"" Jan 16 21:19:50.429621 containerd[2539]: time="2026-01-16T21:19:50.429597529Z" level=info msg="connecting to shim 8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a" address="unix:///run/containerd/s/31d5691b537ad2655bcce05ed429ec018803e51a0bf1f22a4624577c320afd08" protocol=ttrpc version=3 Jan 16 21:19:50.447021 systemd[1]: Started 
cri-containerd-8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a.scope - libcontainer container 8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a. Jan 16 21:19:50.454594 containerd[2539]: time="2026-01-16T21:19:50.454571038Z" level=info msg="connecting to shim dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b" address="unix:///run/containerd/s/a5d9c9c184dd36c577279778b1b347bcb2e155c4516990235d62f6126fd19848" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:50.467000 audit: BPF prog-id=234 op=LOAD Jan 16 21:19:50.469288 kernel: kauditd_printk_skb: 228 callbacks suppressed Jan 16 21:19:50.469350 kernel: audit: type=1334 audit(1768598390.467:665): prog-id=234 op=LOAD Jan 16 21:19:50.471000 audit: BPF prog-id=235 op=LOAD Jan 16 21:19:50.474906 kernel: audit: type=1334 audit(1768598390.471:666): prog-id=235 op=LOAD Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.484728 kernel: audit: type=1300 audit(1768598390.471:666): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.484786 kernel: audit: type=1327 audit(1768598390.471:666): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.484991 systemd[1]: Started cri-containerd-dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b.scope - libcontainer container dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b. Jan 16 21:19:50.487943 kernel: audit: type=1334 audit(1768598390.471:667): prog-id=235 op=UNLOAD Jan 16 21:19:50.471000 audit: BPF prog-id=235 op=UNLOAD Jan 16 21:19:50.492183 kernel: audit: type=1300 audit(1768598390.471:667): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.496730 kernel: audit: type=1327 audit(1768598390.471:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: BPF prog-id=236 op=LOAD Jan 
16 21:19:50.502520 kernel: audit: type=1334 audit(1768598390.471:668): prog-id=236 op=LOAD Jan 16 21:19:50.502574 kernel: audit: type=1300 audit(1768598390.471:668): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.506719 kernel: audit: type=1327 audit(1768598390.471:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: BPF prog-id=237 op=LOAD Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: BPF prog-id=237 op=UNLOAD Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: BPF prog-id=236 op=UNLOAD Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.471000 audit: BPF prog-id=238 op=LOAD Jan 16 21:19:50.471000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5533 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:19:50.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839393935323062653532323530613635313430303633363731353339 Jan 16 21:19:50.516000 audit: BPF prog-id=239 op=LOAD Jan 16 21:19:50.516000 audit: BPF prog-id=240 op=LOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.516000 audit: BPF prog-id=240 op=UNLOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.516000 audit: BPF prog-id=241 op=LOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.516000 audit: BPF prog-id=242 op=LOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.516000 audit: BPF prog-id=242 op=UNLOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.516000 audit: BPF prog-id=241 op=UNLOAD Jan 16 21:19:50.516000 audit[5666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.517000 audit: BPF prog-id=243 op=LOAD Jan 16 21:19:50.517000 audit[5666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5647 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:50.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461653635656464656138346434323664376135373732666362366236 Jan 16 21:19:50.526155 containerd[2539]: time="2026-01-16T21:19:50.526135175Z" level=info msg="StartContainer for \"8999520be52250a651400636715390cce7f98bf073551db285efb82374b45e8a\" returns successfully" Jan 16 21:19:50.549243 systemd-networkd[2320]: vxlan.calico: Gained IPv6LL Jan 16 21:19:50.561040 containerd[2539]: time="2026-01-16T21:19:50.561020489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589885c4ff-m24sm,Uid:6049a9b7-c23b-4cd6-b5e0-33d5a278ce35,Namespace:calico-system,Attempt:0,} returns sandbox id \"dae65eddea84d426d7a5772fcb6b61104c26a678bec6a5cb41dbee1b4eae410b\"" Jan 16 21:19:50.562180 containerd[2539]: time="2026-01-16T21:19:50.562163440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:19:50.823228 containerd[2539]: time="2026-01-16T21:19:50.823142220Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Jan 16 21:19:50.825667 containerd[2539]: time="2026-01-16T21:19:50.825622396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:19:50.825667 containerd[2539]: time="2026-01-16T21:19:50.825644790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:50.825771 kubelet[3994]: E0116 21:19:50.825744 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:19:50.826076 kubelet[3994]: E0116 21:19:50.825782 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:19:50.826104 kubelet[3994]: E0116 21:19:50.825909 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e06177460524888ac9b32b3cfd69bc3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:50.828621 containerd[2539]: time="2026-01-16T21:19:50.828586564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:19:51.060920 systemd-networkd[2320]: calid7c710219b5: 
Gained IPv6LL Jan 16 21:19:51.080885 containerd[2539]: time="2026-01-16T21:19:51.080792714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:19:51.083210 containerd[2539]: time="2026-01-16T21:19:51.083184547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:19:51.083284 containerd[2539]: time="2026-01-16T21:19:51.083196903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:51.083433 kubelet[3994]: E0116 21:19:51.083410 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:19:51.083480 kubelet[3994]: E0116 21:19:51.083438 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:19:51.083564 kubelet[3994]: E0116 21:19:51.083536 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:51.084825 kubelet[3994]: E0116 21:19:51.084770 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:19:51.390135 kubelet[3994]: E0116 21:19:51.389896 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:19:51.412217 kubelet[3994]: I0116 21:19:51.412148 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-pb2m9" podStartSLOduration=65.412125365 podStartE2EDuration="1m5.412125365s" podCreationTimestamp="2026-01-16 21:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:19:51.400093305 +0000 UTC m=+65.247580224" watchObservedRunningTime="2026-01-16 21:19:51.412125365 +0000 UTC m=+65.259612283" Jan 16 21:19:51.419000 audit[5709]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:51.419000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe35e95630 a2=0 a3=7ffe35e9561c items=0 ppid=4151 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:51.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:51.425000 audit[5709]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:51.425000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe35e95630 a2=0 a3=0 items=0 ppid=4151 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:51.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:51.443000 audit[5711]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=5711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:51.443000 
audit[5711]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde0ae4d90 a2=0 a3=7ffde0ae4d7c items=0 ppid=4151 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:51.443000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:51.446000 audit[5711]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=5711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:51.446000 audit[5711]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde0ae4d90 a2=0 a3=7ffde0ae4d7c items=0 ppid=4151 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:51.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.391320 kubelet[3994]: E0116 21:19:52.391267 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:19:55.253035 containerd[2539]: time="2026-01-16T21:19:55.252986572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:55.335515 systemd-networkd[2320]: cali6819cd81150: Link UP Jan 16 21:19:55.336344 systemd-networkd[2320]: cali6819cd81150: Gained carrier Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.287 [INFO][5722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0 calico-apiserver-64fcf4db84- calico-apiserver 2ea28179-07a2-4a0c-9cc7-b4eadca6090c 855 0 2026-01-16 21:19:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fcf4db84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 calico-apiserver-64fcf4db84-9ph5q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6819cd81150 [] [] }} ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.287 [INFO][5722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 
21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.306 [INFO][5733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" HandleID="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.306 [INFO][5733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" HandleID="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580.0.0-p-452f1e7704", "pod":"calico-apiserver-64fcf4db84-9ph5q", "timestamp":"2026-01-16 21:19:55.306251017 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.306 [INFO][5733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.306 [INFO][5733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.306 [INFO][5733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.310 [INFO][5733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.313 [INFO][5733] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.315 [INFO][5733] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.317 [INFO][5733] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.318 [INFO][5733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.318 [INFO][5733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.319 [INFO][5733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.323 [INFO][5733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.330 [INFO][5733] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.67/26] block=192.168.56.64/26 handle="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.330 [INFO][5733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.67/26] handle="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.330 [INFO][5733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:55.350799 containerd[2539]: 2026-01-16 21:19:55.330 [INFO][5733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.67/26] IPv6=[] ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" HandleID="k8s-pod-network.d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.331 [INFO][5722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0", GenerateName:"calico-apiserver-64fcf4db84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea28179-07a2-4a0c-9cc7-b4eadca6090c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"64fcf4db84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"calico-apiserver-64fcf4db84-9ph5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6819cd81150", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.331 [INFO][5722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.67/32] ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.331 [INFO][5722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6819cd81150 ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.336 [INFO][5722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.337 [INFO][5722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0", GenerateName:"calico-apiserver-64fcf4db84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ea28179-07a2-4a0c-9cc7-b4eadca6090c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fcf4db84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d", Pod:"calico-apiserver-64fcf4db84-9ph5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6819cd81150", MAC:"32:68:e0:f7:50:24", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:55.351736 containerd[2539]: 2026-01-16 21:19:55.347 [INFO][5722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" Namespace="calico-apiserver" Pod="calico-apiserver-64fcf4db84-9ph5q" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--64fcf4db84--9ph5q-eth0" Jan 16 21:19:55.360000 audit[5749]: NETFILTER_CFG table=filter:131 family=2 entries=60 op=nft_register_chain pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:55.360000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=32248 a0=3 a1=7ffe63168bb0 a2=0 a3=7ffe63168b9c items=0 ppid=5417 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.360000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:55.386517 containerd[2539]: time="2026-01-16T21:19:55.386491942Z" level=info msg="connecting to shim d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d" address="unix:///run/containerd/s/5df67bda5a5c23b0f502d081674012b49c9c2372d2c9b01740b47eecaf4f7b00" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:55.407997 systemd[1]: Started cri-containerd-d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d.scope - libcontainer container d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d. 
Jan 16 21:19:55.414000 audit: BPF prog-id=244 op=LOAD Jan 16 21:19:55.414000 audit: BPF prog-id=245 op=LOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=245 op=UNLOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=246 op=LOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=247 op=LOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=247 op=UNLOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=246 op=UNLOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.414000 audit: BPF prog-id=248 op=LOAD Jan 16 21:19:55.414000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:55.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439333632633430356535386435376238343537613538653966303434 Jan 16 21:19:55.442604 containerd[2539]: time="2026-01-16T21:19:55.442583435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fcf4db84-9ph5q,Uid:2ea28179-07a2-4a0c-9cc7-b4eadca6090c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9362c405e58d57b8457a58e9f044551012da2884173374f48499470c4175c0d\"" Jan 16 21:19:55.443518 containerd[2539]: time="2026-01-16T21:19:55.443487380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:19:55.694322 containerd[2539]: time="2026-01-16T21:19:55.694250540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:19:55.696933 containerd[2539]: time="2026-01-16T21:19:55.696821784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 
21:19:55.696933 containerd[2539]: time="2026-01-16T21:19:55.696878258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:55.697024 kubelet[3994]: E0116 21:19:55.696998 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:55.697230 kubelet[3994]: E0116 21:19:55.697030 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:55.697230 kubelet[3994]: E0116 21:19:55.697144 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dm8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:55.698445 kubelet[3994]: E0116 21:19:55.698421 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:19:56.398052 kubelet[3994]: E0116 21:19:56.398018 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:19:56.417000 audit[5799]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:56.419356 kernel: kauditd_printk_skb: 71 callbacks suppressed Jan 16 21:19:56.419402 kernel: audit: type=1325 audit(1768598396.417:694): table=filter:132 family=2 entries=14 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:56.417000 audit[5799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc54426200 a2=0 a3=7ffc544261ec items=0 ppid=4151 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:56.426063 kernel: audit: type=1300 audit(1768598396.417:694): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc54426200 a2=0 a3=7ffc544261ec items=0 ppid=4151 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:56.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:56.429143 kernel: audit: type=1327 audit(1768598396.417:694): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:56.421000 audit[5799]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:56.431940 kernel: audit: type=1325 audit(1768598396.421:695): table=nat:133 family=2 entries=20 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:56.421000 audit[5799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc54426200 a2=0 a3=7ffc544261ec items=0 ppid=4151 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:56.436756 kernel: audit: type=1300 audit(1768598396.421:695): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc54426200 a2=0 a3=7ffc544261ec items=0 ppid=4151 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:56.421000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:56.440607 kernel: audit: type=1327 audit(1768598396.421:695): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:56.820899 systemd-networkd[2320]: cali6819cd81150: Gained IPv6LL Jan 16 21:19:57.252950 containerd[2539]: time="2026-01-16T21:19:57.252877032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:57.252950 containerd[2539]: time="2026-01-16T21:19:57.252899492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:57.359612 systemd-networkd[2320]: cali90e3b68b095: Link UP Jan 16 21:19:57.360155 systemd-networkd[2320]: cali90e3b68b095: Gained carrier Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.304 [INFO][5800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0 goldmane-666569f655- calico-system 2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef 853 0 2026-01-16 21:19:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 goldmane-666569f655-x8pr8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali90e3b68b095 [] [] }} ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-" Jan 16 21:19:57.373302 
containerd[2539]: 2026-01-16 21:19:57.304 [INFO][5800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.329 [INFO][5827] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" HandleID="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Workload="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.329 [INFO][5827] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" HandleID="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Workload="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"goldmane-666569f655-x8pr8", "timestamp":"2026-01-16 21:19:57.329794153 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.330 [INFO][5827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.330 [INFO][5827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.330 [INFO][5827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.334 [INFO][5827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.337 [INFO][5827] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.339 [INFO][5827] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.340 [INFO][5827] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.342 [INFO][5827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.342 [INFO][5827] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.343 [INFO][5827] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.346 [INFO][5827] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5827] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.68/26] block=192.168.56.64/26 handle="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.68/26] handle="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:57.373302 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5827] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.68/26] IPv6=[] ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" HandleID="k8s-pod-network.8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Workload="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.354 [INFO][5800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"goldmane-666569f655-x8pr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90e3b68b095", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.355 [INFO][5800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.68/32] ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.355 [INFO][5800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90e3b68b095 ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.361 [INFO][5800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.362 [INFO][5800] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc", Pod:"goldmane-666569f655-x8pr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90e3b68b095", MAC:"ba:3a:42:30:5c:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:57.373751 containerd[2539]: 2026-01-16 21:19:57.371 [INFO][5800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" Namespace="calico-system" Pod="goldmane-666569f655-x8pr8" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-goldmane--666569f655--x8pr8-eth0" Jan 16 21:19:57.383000 audit[5849]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=5849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:57.386875 kernel: audit: type=1325 audit(1768598397.383:696): table=filter:134 family=2 entries=48 op=nft_register_chain pid=5849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:57.386926 kernel: audit: type=1300 audit(1768598397.383:696): arch=c000003e syscall=46 success=yes exit=26352 a0=3 a1=7fffbd238270 a2=0 a3=7fffbd23825c items=0 ppid=5417 pid=5849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.383000 audit[5849]: SYSCALL arch=c000003e syscall=46 success=yes exit=26352 a0=3 a1=7fffbd238270 a2=0 a3=7fffbd23825c items=0 ppid=5417 pid=5849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.392877 kernel: audit: type=1327 audit(1768598397.383:696): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:57.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:57.399566 kubelet[3994]: E0116 21:19:57.399387 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:19:57.422293 containerd[2539]: time="2026-01-16T21:19:57.422233907Z" level=info msg="connecting to shim 8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc" address="unix:///run/containerd/s/adb835e2a35f7f870dfd095180a5cd5002f64738d20a61b3aa8ddba9eea60ae0" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:57.439014 systemd[1]: Started cri-containerd-8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc.scope - libcontainer container 8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc. Jan 16 21:19:57.448000 audit: BPF prog-id=249 op=LOAD Jan 16 21:19:57.448000 audit: BPF prog-id=250 op=LOAD Jan 16 21:19:57.448000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.448000 audit: BPF prog-id=250 op=UNLOAD Jan 16 21:19:57.448000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.448000 audit: BPF prog-id=251 op=LOAD Jan 16 21:19:57.448000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.448000 audit: BPF prog-id=252 op=LOAD Jan 16 21:19:57.448000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.449000 audit: BPF prog-id=252 op=UNLOAD Jan 16 21:19:57.449000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.449000 audit: BPF prog-id=251 op=UNLOAD Jan 16 21:19:57.449000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.451853 kernel: audit: type=1334 audit(1768598397.448:697): prog-id=249 op=LOAD Jan 16 21:19:57.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.449000 audit: BPF prog-id=253 op=LOAD Jan 16 21:19:57.449000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5857 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323136383466613466623734633564393233356635393765623663 Jan 16 21:19:57.476172 systemd-networkd[2320]: caliedce231526e: Link UP Jan 16 21:19:57.476919 systemd-networkd[2320]: caliedce231526e: Gained 
carrier Jan 16 21:19:57.498448 containerd[2539]: time="2026-01-16T21:19:57.498426610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8pr8,Uid:2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"8521684fa4fb74c5d9235f597eb6c7e2cbf30068724942b5b989ee6449a972dc\"" Jan 16 21:19:57.500243 containerd[2539]: time="2026-01-16T21:19:57.500213404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.309 [INFO][5804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0 calico-apiserver-d8d4c957- calico-apiserver 8b6a9edf-78a2-4eb2-9228-633a08a758ae 854 0 2026-01-16 21:19:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8d4c957 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 calico-apiserver-d8d4c957-8nr7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedce231526e [] [] }} ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.309 [INFO][5804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.332 [INFO][5832] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" HandleID="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.333 [INFO][5832] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" HandleID="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580.0.0-p-452f1e7704", "pod":"calico-apiserver-d8d4c957-8nr7d", "timestamp":"2026-01-16 21:19:57.332418114 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.333 [INFO][5832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.353 [INFO][5832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.437 [INFO][5832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.441 [INFO][5832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.444 [INFO][5832] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.445 [INFO][5832] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.454 [INFO][5832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.455 [INFO][5832] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.456 [INFO][5832] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9 Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.464 [INFO][5832] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.470 [INFO][5832] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.69/26] block=192.168.56.64/26 handle="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.470 [INFO][5832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.69/26] handle="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.470 [INFO][5832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:57.500728 containerd[2539]: 2026-01-16 21:19:57.470 [INFO][5832] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.69/26] IPv6=[] ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" HandleID="k8s-pod-network.a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.471 [INFO][5804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0", GenerateName:"calico-apiserver-d8d4c957-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b6a9edf-78a2-4eb2-9228-633a08a758ae", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"d8d4c957", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"calico-apiserver-d8d4c957-8nr7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedce231526e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.471 [INFO][5804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.69/32] ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.471 [INFO][5804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedce231526e ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.482 [INFO][5804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.485 [INFO][5804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0", GenerateName:"calico-apiserver-d8d4c957-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b6a9edf-78a2-4eb2-9228-633a08a758ae", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d4c957", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9", Pod:"calico-apiserver-d8d4c957-8nr7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedce231526e", MAC:"7a:c3:1b:75:e3:c7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:57.501595 containerd[2539]: 2026-01-16 21:19:57.496 [INFO][5804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-8nr7d" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--8nr7d-eth0" Jan 16 21:19:57.517000 audit[5905]: NETFILTER_CFG table=filter:135 family=2 entries=45 op=nft_register_chain pid=5905 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:57.517000 audit[5905]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7fff3871ea00 a2=0 a3=7fff3871e9ec items=0 ppid=5417 pid=5905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.517000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:57.540688 containerd[2539]: time="2026-01-16T21:19:57.540630247Z" level=info msg="connecting to shim a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9" address="unix:///run/containerd/s/5bb5a09e9b5c152801d0dff674f55db847d8bcdf5e66d11d254f779a40ce044e" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:57.559992 systemd[1]: Started cri-containerd-a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9.scope - libcontainer container a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9. 
Jan 16 21:19:57.567000 audit: BPF prog-id=254 op=LOAD Jan 16 21:19:57.567000 audit: BPF prog-id=255 op=LOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=255 op=UNLOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=256 op=LOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=257 op=LOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=257 op=UNLOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=256 op=UNLOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.567000 audit: BPF prog-id=258 op=LOAD Jan 16 21:19:57.567000 audit[5926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5914 pid=5926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:57.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139613435323932316162356239613631373566326336356238363739 Jan 16 21:19:57.596414 containerd[2539]: time="2026-01-16T21:19:57.596393103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-8nr7d,Uid:8b6a9edf-78a2-4eb2-9228-633a08a758ae,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a9a452921ab5b9a6175f2c65b8679e05f4e796308d02c77c7ed2e1daf21d60c9\"" Jan 16 21:19:57.770031 containerd[2539]: time="2026-01-16T21:19:57.769931543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:19:57.773691 containerd[2539]: time="2026-01-16T21:19:57.773172654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:19:57.773691 containerd[2539]: time="2026-01-16T21:19:57.773233347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, 
bytes read=0" Jan 16 21:19:57.774020 kubelet[3994]: E0116 21:19:57.773320 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:19:57.774020 kubelet[3994]: E0116 21:19:57.773358 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:19:57.774020 kubelet[3994]: E0116 21:19:57.773535 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,
ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:57.774354 containerd[2539]: time="2026-01-16T21:19:57.774094128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:19:57.775151 kubelet[3994]: E0116 21:19:57.775126 3994 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:19:58.039976 containerd[2539]: time="2026-01-16T21:19:58.039912277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:19:58.042542 containerd[2539]: time="2026-01-16T21:19:58.042501810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:19:58.042611 containerd[2539]: time="2026-01-16T21:19:58.042508384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:58.042683 kubelet[3994]: E0116 21:19:58.042642 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:58.042712 kubelet[3994]: E0116 21:19:58.042693 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:58.042873 kubelet[3994]: E0116 21:19:58.042802 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m69pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:58.043984 kubelet[3994]: E0116 21:19:58.043961 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:19:58.253087 containerd[2539]: time="2026-01-16T21:19:58.252982970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:58.331823 systemd-networkd[2320]: cali38681dad8c6: Link UP Jan 16 
21:19:58.333411 systemd-networkd[2320]: cali38681dad8c6: Gained carrier Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.288 [INFO][5953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0 coredns-668d6bf9bc- kube-system 1afe31de-e2b2-4f78-ad0e-413ab9447575 844 0 2026-01-16 21:18:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 coredns-668d6bf9bc-s6qvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38681dad8c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.288 [INFO][5953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.305 [INFO][5965] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" HandleID="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.306 [INFO][5965] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" 
HandleID="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"coredns-668d6bf9bc-s6qvx", "timestamp":"2026-01-16 21:19:58.305908453 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.306 [INFO][5965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.306 [INFO][5965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.306 [INFO][5965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.309 [INFO][5965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.312 [INFO][5965] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.314 [INFO][5965] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.316 [INFO][5965] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.317 [INFO][5965] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.317 [INFO][5965] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.319 [INFO][5965] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329 Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.322 [INFO][5965] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.329 [INFO][5965] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.70/26] block=192.168.56.64/26 handle="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.329 [INFO][5965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.70/26] handle="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.329 [INFO][5965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:19:58.346785 containerd[2539]: 2026-01-16 21:19:58.329 [INFO][5965] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.70/26] IPv6=[] ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" HandleID="k8s-pod-network.62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Workload="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.348156 containerd[2539]: 2026-01-16 21:19:58.330 [INFO][5953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1afe31de-e2b2-4f78-ad0e-413ab9447575", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"coredns-668d6bf9bc-s6qvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali38681dad8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:58.348156 containerd[2539]: 2026-01-16 21:19:58.330 [INFO][5953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.70/32] ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.348156 containerd[2539]: 2026-01-16 21:19:58.330 [INFO][5953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38681dad8c6 ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.348156 containerd[2539]: 2026-01-16 21:19:58.334 [INFO][5953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.348156 containerd[2539]: 2026-01-16 21:19:58.335 [INFO][5953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1afe31de-e2b2-4f78-ad0e-413ab9447575", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329", Pod:"coredns-668d6bf9bc-s6qvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38681dad8c6", MAC:"22:17:ab:84:22:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:58.348156 
containerd[2539]: 2026-01-16 21:19:58.344 [INFO][5953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6qvx" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-coredns--668d6bf9bc--s6qvx-eth0" Jan 16 21:19:58.356000 audit[5981]: NETFILTER_CFG table=filter:136 family=2 entries=44 op=nft_register_chain pid=5981 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:58.356000 audit[5981]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffeee958ca0 a2=0 a3=7ffeee958c8c items=0 ppid=5417 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.356000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:58.379313 containerd[2539]: time="2026-01-16T21:19:58.379286662Z" level=info msg="connecting to shim 62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329" address="unix:///run/containerd/s/be93c5e5ccbc0e9fbf468785c4aec4e43e9d2c0a4455a7a097524502e516ec11" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:58.396998 systemd[1]: Started cri-containerd-62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329.scope - libcontainer container 62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329. 
Jan 16 21:19:58.402292 kubelet[3994]: E0116 21:19:58.402257 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:19:58.404503 kubelet[3994]: E0116 21:19:58.404475 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:19:58.406000 audit: BPF prog-id=259 op=LOAD Jan 16 21:19:58.407000 audit: BPF prog-id=260 op=LOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=260 op=UNLOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=261 op=LOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=262 op=LOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=262 op=UNLOAD Jan 16 
21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=261 op=UNLOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.407000 audit: BPF prog-id=263 op=LOAD Jan 16 21:19:58.407000 audit[6000]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=5989 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343436613036663762376661303362323666666637313239303766 Jan 16 21:19:58.445740 
containerd[2539]: time="2026-01-16T21:19:58.445641538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qvx,Uid:1afe31de-e2b2-4f78-ad0e-413ab9447575,Namespace:kube-system,Attempt:0,} returns sandbox id \"62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329\"" Jan 16 21:19:58.448000 audit[6026]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=6026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:58.448000 audit[6026]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe978b9e90 a2=0 a3=7ffe978b9e7c items=0 ppid=4151 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:58.450519 containerd[2539]: time="2026-01-16T21:19:58.449341451Z" level=info msg="CreateContainer within sandbox \"62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:19:58.454000 audit[6026]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=6026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:58.454000 audit[6026]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe978b9e90 a2=0 a3=7ffe978b9e7c items=0 ppid=4151 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:58.463000 audit[6028]: NETFILTER_CFG 
table=filter:139 family=2 entries=14 op=nft_register_rule pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:58.463000 audit[6028]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe822fd430 a2=0 a3=7ffe822fd41c items=0 ppid=4151 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:58.466000 audit[6028]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:58.466000 audit[6028]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe822fd430 a2=0 a3=7ffe822fd41c items=0 ppid=4151 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:58.472181 containerd[2539]: time="2026-01-16T21:19:58.470916487Z" level=info msg="Container 6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:58.485028 containerd[2539]: time="2026-01-16T21:19:58.485007216Z" level=info msg="CreateContainer within sandbox \"62446a06f7b7fa03b26fff712907fde1dfdd80aca7a3130d3832ce675fcbb329\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1\"" Jan 16 21:19:58.485381 containerd[2539]: time="2026-01-16T21:19:58.485366873Z" level=info msg="StartContainer for 
\"6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1\"" Jan 16 21:19:58.486275 containerd[2539]: time="2026-01-16T21:19:58.486234209Z" level=info msg="connecting to shim 6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1" address="unix:///run/containerd/s/be93c5e5ccbc0e9fbf468785c4aec4e43e9d2c0a4455a7a097524502e516ec11" protocol=ttrpc version=3 Jan 16 21:19:58.499206 systemd[1]: Started cri-containerd-6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1.scope - libcontainer container 6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1. Jan 16 21:19:58.505000 audit: BPF prog-id=264 op=LOAD Jan 16 21:19:58.506000 audit: BPF prog-id=265 op=LOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=265 op=UNLOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=266 op=LOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=267 op=LOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=267 op=UNLOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=266 op=UNLOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.506000 audit: BPF prog-id=268 op=LOAD Jan 16 21:19:58.506000 audit[6029]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5989 pid=6029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:58.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664616465366136656435656537646331613332646637643134643461 Jan 16 21:19:58.522560 containerd[2539]: time="2026-01-16T21:19:58.522540240Z" level=info msg="StartContainer for \"6dade6a6ed5ee7dc1a32df7d14d4ae488a7d1cf3ead49943fb26a2ba0faacdd1\" returns successfully" Jan 16 21:19:59.252333 containerd[2539]: time="2026-01-16T21:19:59.252296565Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:19:59.252430 containerd[2539]: time="2026-01-16T21:19:59.252296649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:59.252920 systemd-networkd[2320]: cali90e3b68b095: Gained IPv6LL Jan 16 21:19:59.253110 systemd-networkd[2320]: caliedce231526e: Gained IPv6LL Jan 16 21:19:59.360088 systemd-networkd[2320]: calif7bc255a088: Link UP Jan 16 21:19:59.361131 systemd-networkd[2320]: calif7bc255a088: Gained carrier Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.304 [INFO][6060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0 calico-apiserver-d8d4c957- calico-apiserver be8f7779-a5b2-41ff-909f-1387d7e3242a 852 0 2026-01-16 21:19:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8d4c957 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 calico-apiserver-d8d4c957-cnh4g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7bc255a088 [] [] }} ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.305 [INFO][6060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" 
Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.327 [INFO][6085] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" HandleID="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.327 [INFO][6085] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" HandleID="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580.0.0-p-452f1e7704", "pod":"calico-apiserver-d8d4c957-cnh4g", "timestamp":"2026-01-16 21:19:59.327681242 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.327 [INFO][6085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.327 [INFO][6085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.327 [INFO][6085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.332 [INFO][6085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.336 [INFO][6085] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.339 [INFO][6085] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.340 [INFO][6085] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.342 [INFO][6085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.342 [INFO][6085] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.343 [INFO][6085] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5 Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.346 [INFO][6085] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6085] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.71/26] block=192.168.56.64/26 handle="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.71/26] handle="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:59.373793 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6085] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.71/26] IPv6=[] ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" HandleID="k8s-pod-network.917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.355 [INFO][6060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0", GenerateName:"calico-apiserver-d8d4c957-", Namespace:"calico-apiserver", SelfLink:"", UID:"be8f7779-a5b2-41ff-909f-1387d7e3242a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"d8d4c957", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"calico-apiserver-d8d4c957-cnh4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7bc255a088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.355 [INFO][6060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.71/32] ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.355 [INFO][6060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7bc255a088 ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.360 [INFO][6060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.361 [INFO][6060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0", GenerateName:"calico-apiserver-d8d4c957-", Namespace:"calico-apiserver", SelfLink:"", UID:"be8f7779-a5b2-41ff-909f-1387d7e3242a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d4c957", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5", Pod:"calico-apiserver-d8d4c957-cnh4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7bc255a088", MAC:"ce:61:c2:2b:8e:1b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:59.374377 containerd[2539]: 2026-01-16 21:19:59.371 [INFO][6060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" Namespace="calico-apiserver" Pod="calico-apiserver-d8d4c957-cnh4g" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--apiserver--d8d4c957--cnh4g-eth0" Jan 16 21:19:59.382000 audit[6108]: NETFILTER_CFG table=filter:141 family=2 entries=53 op=nft_register_chain pid=6108 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:59.382000 audit[6108]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffd819e0c70 a2=0 a3=7ffd819e0c5c items=0 ppid=5417 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.382000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:59.409283 kubelet[3994]: E0116 21:19:59.409029 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:19:59.409854 kubelet[3994]: E0116 21:19:59.409802 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:19:59.410262 containerd[2539]: time="2026-01-16T21:19:59.410196640Z" level=info msg="connecting to shim 917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5" address="unix:///run/containerd/s/56746c2c9a37fd9a90b06953dc4de467e5f86214fbc5e41005bba76df6c1a693" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:59.439086 systemd[1]: Started cri-containerd-917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5.scope - libcontainer container 917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5. Jan 16 21:19:59.454000 audit: BPF prog-id=269 op=LOAD Jan 16 21:19:59.456000 audit: BPF prog-id=270 op=LOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=270 op=UNLOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=271 op=LOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=272 op=LOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=272 op=UNLOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=271 op=UNLOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.456000 audit: BPF prog-id=273 op=LOAD Jan 16 21:19:59.456000 audit[6128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=6117 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931376137653561343730323262383737613232623236643131646161 Jan 16 21:19:59.505471 systemd-networkd[2320]: cali3e5f2701665: Link UP Jan 16 21:19:59.507259 systemd-networkd[2320]: cali3e5f2701665: Gained carrier Jan 16 21:19:59.522100 kubelet[3994]: I0116 21:19:59.521735 3994 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s6qvx" podStartSLOduration=73.521723343 podStartE2EDuration="1m13.521723343s" podCreationTimestamp="2026-01-16 21:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:19:59.464424328 +0000 UTC m=+73.311911243" watchObservedRunningTime="2026-01-16 21:19:59.521723343 +0000 UTC m=+73.369210260" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.315 [INFO][6068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0 csi-node-driver- calico-system ad5a70af-916a-4e95-9866-1f1c8f4329d0 713 0 2026-01-16 21:19:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 csi-node-driver-7pf8t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e5f2701665 [] [] }} ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.315 [INFO][6068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.337 [INFO][6091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" HandleID="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Workload="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.337 [INFO][6091] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" HandleID="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Workload="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"csi-node-driver-7pf8t", "timestamp":"2026-01-16 21:19:59.337295568 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.337 [INFO][6091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.353 [INFO][6091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.433 [INFO][6091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.443 [INFO][6091] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.463 [INFO][6091] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.472 [INFO][6091] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.475 [INFO][6091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.475 [INFO][6091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.478 [INFO][6091] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.482 [INFO][6091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.494 [INFO][6091] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.72/26] block=192.168.56.64/26 handle="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.494 [INFO][6091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.72/26] handle="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.494 [INFO][6091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:19:59.525117 containerd[2539]: 2026-01-16 21:19:59.494 [INFO][6091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.72/26] IPv6=[] ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" HandleID="k8s-pod-network.77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Workload="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525556 containerd[2539]: 2026-01-16 21:19:59.496 [INFO][6068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad5a70af-916a-4e95-9866-1f1c8f4329d0", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"csi-node-driver-7pf8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e5f2701665", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:59.525556 containerd[2539]: 2026-01-16 21:19:59.496 [INFO][6068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.72/32] ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525556 containerd[2539]: 2026-01-16 21:19:59.496 [INFO][6068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e5f2701665 ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525556 containerd[2539]: 2026-01-16 21:19:59.507 [INFO][6068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.525556 
containerd[2539]: 2026-01-16 21:19:59.511 [INFO][6068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad5a70af-916a-4e95-9866-1f1c8f4329d0", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf", Pod:"csi-node-driver-7pf8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e5f2701665", MAC:"b6:bb:18:cd:0e:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:19:59.525556 containerd[2539]: 
2026-01-16 21:19:59.522 [INFO][6068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" Namespace="calico-system" Pod="csi-node-driver-7pf8t" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-csi--node--driver--7pf8t-eth0" Jan 16 21:19:59.563049 containerd[2539]: time="2026-01-16T21:19:59.562110664Z" level=info msg="connecting to shim 77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf" address="unix:///run/containerd/s/441964281530d79f72f248c33d8ead4f5c9e8758da1ab10efd8b7b80f0dabcee" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:59.582000 audit[6175]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:59.582000 audit[6175]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea3936190 a2=0 a3=7ffea393617c items=0 ppid=4151 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:59.596986 systemd[1]: Started cri-containerd-77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf.scope - libcontainer container 77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf. 
Jan 16 21:19:59.600000 audit[6192]: NETFILTER_CFG table=filter:143 family=2 entries=62 op=nft_register_chain pid=6192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:19:59.600000 audit[6192]: SYSCALL arch=c000003e syscall=46 success=yes exit=28352 a0=3 a1=7ffec2ba1410 a2=0 a3=7ffec2ba13fc items=0 ppid=5417 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.600000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:19:59.607000 audit: BPF prog-id=274 op=LOAD Jan 16 21:19:59.608000 audit: BPF prog-id=275 op=LOAD Jan 16 21:19:59.608000 audit[6174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.608000 audit: BPF prog-id=275 op=UNLOAD Jan 16 21:19:59.608000 audit[6174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.608000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.608000 audit: BPF prog-id=276 op=LOAD Jan 16 21:19:59.608000 audit[6174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.608000 audit: BPF prog-id=277 op=LOAD Jan 16 21:19:59.608000 audit[6174]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.609000 audit: BPF prog-id=277 op=UNLOAD Jan 16 21:19:59.609000 audit[6174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:19:59.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.609000 audit: BPF prog-id=276 op=UNLOAD Jan 16 21:19:59.609000 audit[6174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.609000 audit: BPF prog-id=278 op=LOAD Jan 16 21:19:59.609000 audit[6174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6164 pid=6174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737616435303466643634373066636637366264323365386661313665 Jan 16 21:19:59.626000 audit[6175]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:59.626000 audit[6175]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffea3936190 a2=0 a3=7ffea393617c items=0 
ppid=4151 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:59.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:59.634361 containerd[2539]: time="2026-01-16T21:19:59.633992253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d4c957-cnh4g,Uid:be8f7779-a5b2-41ff-909f-1387d7e3242a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"917a7e5a47022b877a22b26d11daab292e1f0171c3a1b133f9f0404cbd7984d5\"" Jan 16 21:19:59.637956 containerd[2539]: time="2026-01-16T21:19:59.637789110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:19:59.644906 containerd[2539]: time="2026-01-16T21:19:59.644886998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pf8t,Uid:ad5a70af-916a-4e95-9866-1f1c8f4329d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"77ad504fd6470fcf76bd23e8fa16e44afaefcd9da3e4f635905aa9a717f97ecf\"" Jan 16 21:19:59.828909 systemd-networkd[2320]: cali38681dad8c6: Gained IPv6LL Jan 16 21:19:59.875352 containerd[2539]: time="2026-01-16T21:19:59.875316805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:19:59.877920 containerd[2539]: time="2026-01-16T21:19:59.877886477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:19:59.877967 containerd[2539]: time="2026-01-16T21:19:59.877942878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:19:59.878038 
kubelet[3994]: E0116 21:19:59.878015 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:59.878067 kubelet[3994]: E0116 21:19:59.878046 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:19:59.878249 kubelet[3994]: E0116 21:19:59.878208 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:19:59.878567 containerd[2539]: time="2026-01-16T21:19:59.878550519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:19:59.879845 kubelet[3994]: E0116 21:19:59.879814 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:00.115431 containerd[2539]: time="2026-01-16T21:20:00.115377404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:00.119194 
containerd[2539]: time="2026-01-16T21:20:00.119163845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:20:00.119242 containerd[2539]: time="2026-01-16T21:20:00.119215298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:00.119335 kubelet[3994]: E0116 21:20:00.119302 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:00.119389 kubelet[3994]: E0116 21:20:00.119340 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:00.119483 kubelet[3994]: E0116 21:20:00.119440 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:20:00.121283 containerd[2539]: time="2026-01-16T21:20:00.121201554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:20:00.254566 containerd[2539]: time="2026-01-16T21:20:00.254546269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:00.340219 systemd-networkd[2320]: cali84ec5d745ce: Link UP Jan 16 21:20:00.340327 systemd-networkd[2320]: cali84ec5d745ce: Gained carrier Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.287 [INFO][6220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0 calico-kube-controllers-6d6d7b95fc- calico-system 010e89e0-6574-4783-aa50-97e803ab00dc 851 0 2026-01-16 21:19:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d6d7b95fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4580.0.0-p-452f1e7704 calico-kube-controllers-6d6d7b95fc-88tnz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84ec5d745ce [] [] }} ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.287 [INFO][6220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" 
WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.306 [INFO][6232] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" HandleID="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.306 [INFO][6232] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" HandleID="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580.0.0-p-452f1e7704", "pod":"calico-kube-controllers-6d6d7b95fc-88tnz", "timestamp":"2026-01-16 21:20:00.306250277 +0000 UTC"}, Hostname:"ci-4580.0.0-p-452f1e7704", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.306 [INFO][6232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.306 [INFO][6232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.306 [INFO][6232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580.0.0-p-452f1e7704' Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.309 [INFO][6232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.312 [INFO][6232] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.315 [INFO][6232] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.316 [INFO][6232] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.318 [INFO][6232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.318 [INFO][6232] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.319 [INFO][6232] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.324 [INFO][6232] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.335 [INFO][6232] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.56.73/26] block=192.168.56.64/26 handle="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.335 [INFO][6232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.73/26] handle="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" host="ci-4580.0.0-p-452f1e7704" Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.335 [INFO][6232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:20:00.355411 containerd[2539]: 2026-01-16 21:20:00.335 [INFO][6232] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.73/26] IPv6=[] ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" HandleID="k8s-pod-network.a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Workload="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.337 [INFO][6220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0", GenerateName:"calico-kube-controllers-6d6d7b95fc-", Namespace:"calico-system", SelfLink:"", UID:"010e89e0-6574-4783-aa50-97e803ab00dc", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d7b95fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"", Pod:"calico-kube-controllers-6d6d7b95fc-88tnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84ec5d745ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.337 [INFO][6220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.73/32] ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.338 [INFO][6220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84ec5d745ce ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.339 [INFO][6220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.340 [INFO][6220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0", GenerateName:"calico-kube-controllers-6d6d7b95fc-", Namespace:"calico-system", SelfLink:"", UID:"010e89e0-6574-4783-aa50-97e803ab00dc", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d7b95fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580.0.0-p-452f1e7704", ContainerID:"a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec", Pod:"calico-kube-controllers-6d6d7b95fc-88tnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.73/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84ec5d745ce", MAC:"2e:43:7f:d1:e8:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:20:00.356289 containerd[2539]: 2026-01-16 21:20:00.352 [INFO][6220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" Namespace="calico-system" Pod="calico-kube-controllers-6d6d7b95fc-88tnz" WorkloadEndpoint="ci--4580.0.0--p--452f1e7704-k8s-calico--kube--controllers--6d6d7b95fc--88tnz-eth0" Jan 16 21:20:00.364000 audit[6247]: NETFILTER_CFG table=filter:145 family=2 entries=56 op=nft_register_chain pid=6247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:20:00.364000 audit[6247]: SYSCALL arch=c000003e syscall=46 success=yes exit=25484 a0=3 a1=7ffecd9c09d0 a2=0 a3=7ffecd9c09bc items=0 ppid=5417 pid=6247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.364000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:20:00.388157 containerd[2539]: time="2026-01-16T21:20:00.388134004Z" level=info msg="connecting to shim a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec" address="unix:///run/containerd/s/1f2d1686a11066b24244ec825a55b4fd2192af7260a387f2aea33dda2fa03595" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:20:00.408007 systemd[1]: Started cri-containerd-a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec.scope - libcontainer container a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec. 
Jan 16 21:20:00.412235 kubelet[3994]: E0116 21:20:00.411615 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:00.415117 containerd[2539]: time="2026-01-16T21:20:00.414782324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:00.418797 containerd[2539]: time="2026-01-16T21:20:00.418761255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:20:00.418887 containerd[2539]: time="2026-01-16T21:20:00.418875189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:00.419049 kubelet[3994]: E0116 21:20:00.419016 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:00.419153 kubelet[3994]: E0116 21:20:00.419122 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:00.419336 kubelet[3994]: E0116 21:20:00.419305 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:00.420732 kubelet[3994]: E0116 21:20:00.420683 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:20:00.420000 audit: BPF prog-id=279 op=LOAD Jan 16 21:20:00.420000 audit: BPF prog-id=280 op=LOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF 
prog-id=280 op=UNLOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF prog-id=281 op=LOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF prog-id=282 op=LOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF prog-id=282 op=UNLOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF prog-id=281 op=UNLOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.420000 audit: BPF prog-id=283 op=LOAD Jan 16 21:20:00.420000 audit[6269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=6256 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:20:00.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656162613330623361393139336265633331313638313332653836 Jan 16 21:20:00.451881 containerd[2539]: time="2026-01-16T21:20:00.451865585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d7b95fc-88tnz,Uid:010e89e0-6574-4783-aa50-97e803ab00dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2eaba30b3a9193bec31168132e86c7f78bd581e5dc209f81bb79133c63e91ec\"" Jan 16 21:20:00.452986 containerd[2539]: time="2026-01-16T21:20:00.452949503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:20:00.532965 systemd-networkd[2320]: calif7bc255a088: Gained IPv6LL Jan 16 21:20:00.648000 audit[6295]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=6295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:20:00.648000 audit[6295]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc3a05030 a2=0 a3=7ffcc3a0501c items=0 ppid=4151 pid=6295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:20:00.651000 audit[6295]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=6295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:20:00.651000 audit[6295]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc3a05030 a2=0 a3=7ffcc3a0501c items=0 ppid=4151 pid=6295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:20:00.707047 containerd[2539]: time="2026-01-16T21:20:00.706945837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:00.709745 containerd[2539]: time="2026-01-16T21:20:00.709711855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:20:00.709817 containerd[2539]: time="2026-01-16T21:20:00.709765529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:00.709870 kubelet[3994]: E0116 21:20:00.709848 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:00.709899 kubelet[3994]: E0116 21:20:00.709875 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:00.710099 kubelet[3994]: E0116 21:20:00.709982 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dbrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:00.711140 kubelet[3994]: E0116 21:20:00.711117 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:01.412880 kubelet[3994]: E0116 21:20:01.412443 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:01.413564 kubelet[3994]: E0116 21:20:01.413535 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:20:01.414418 kubelet[3994]: E0116 21:20:01.414378 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:01.492915 systemd-networkd[2320]: cali3e5f2701665: Gained IPv6LL Jan 16 21:20:01.493531 systemd-networkd[2320]: cali84ec5d745ce: 
Gained IPv6LL Jan 16 21:20:02.414455 kubelet[3994]: E0116 21:20:02.414193 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:04.254149 containerd[2539]: time="2026-01-16T21:20:04.253820507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:20:04.511337 containerd[2539]: time="2026-01-16T21:20:04.511270963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:04.514032 containerd[2539]: time="2026-01-16T21:20:04.514009780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:20:04.514071 containerd[2539]: time="2026-01-16T21:20:04.514025017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:04.514221 kubelet[3994]: E0116 21:20:04.514128 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:20:04.514221 kubelet[3994]: E0116 21:20:04.514167 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:20:04.514479 kubelet[3994]: E0116 21:20:04.514255 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e06177460524888ac9b32b3cfd69bc3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:04.516529 containerd[2539]: time="2026-01-16T21:20:04.516500038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:20:04.760965 containerd[2539]: time="2026-01-16T21:20:04.760943573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:04.766935 containerd[2539]: time="2026-01-16T21:20:04.766887233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:20:04.766935 containerd[2539]: time="2026-01-16T21:20:04.766929922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:04.767088 kubelet[3994]: E0116 21:20:04.767061 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:20:04.767152 kubelet[3994]: E0116 21:20:04.767142 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:20:04.767392 kubelet[3994]: E0116 21:20:04.767295 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:04.769074 kubelet[3994]: E0116 21:20:04.769027 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:20:09.253189 containerd[2539]: time="2026-01-16T21:20:09.253128456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:09.523683 containerd[2539]: time="2026-01-16T21:20:09.523506753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:09.529350 containerd[2539]: time="2026-01-16T21:20:09.529318263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:09.529423 containerd[2539]: time="2026-01-16T21:20:09.529376560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:09.529499 kubelet[3994]: E0116 21:20:09.529460 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:09.529905 kubelet[3994]: E0116 21:20:09.529508 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:09.529905 kubelet[3994]: E0116 21:20:09.529624 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dm8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:09.531352 kubelet[3994]: E0116 21:20:09.530868 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:20:12.253027 containerd[2539]: time="2026-01-16T21:20:12.252972629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:20:12.499038 containerd[2539]: time="2026-01-16T21:20:12.498999555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
21:20:12.501177 containerd[2539]: time="2026-01-16T21:20:12.501151859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:20:12.501252 containerd[2539]: time="2026-01-16T21:20:12.501163544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:12.501334 kubelet[3994]: E0116 21:20:12.501287 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:20:12.501524 kubelet[3994]: E0116 21:20:12.501340 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:20:12.501524 kubelet[3994]: E0116 21:20:12.501453 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:12.502812 kubelet[3994]: E0116 21:20:12.502762 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:20:13.253367 containerd[2539]: time="2026-01-16T21:20:13.253338213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:20:15.202970 containerd[2539]: time="2026-01-16T21:20:15.202933758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:16.576571 containerd[2539]: time="2026-01-16T21:20:16.576503507Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:20:16.576571 containerd[2539]: time="2026-01-16T21:20:16.576555680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:16.576798 kubelet[3994]: E0116 21:20:16.576619 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:16.576798 kubelet[3994]: E0116 21:20:16.576646 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:16.576992 kubelet[3994]: E0116 21:20:16.576800 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:20:16.577799 containerd[2539]: time="2026-01-16T21:20:16.577598255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:17.682128 containerd[2539]: time="2026-01-16T21:20:17.682104073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:19.073972 containerd[2539]: time="2026-01-16T21:20:19.073879906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:19.073972 containerd[2539]: time="2026-01-16T21:20:19.073902437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:19.074217 kubelet[3994]: E0116 21:20:19.074118 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:19.074217 kubelet[3994]: E0116 21:20:19.074147 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:19.074395 containerd[2539]: time="2026-01-16T21:20:19.074328813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:19.074696 kubelet[3994]: E0116 21:20:19.074639 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:19.076074 kubelet[3994]: E0116 21:20:19.076028 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:19.253526 kubelet[3994]: E0116 21:20:19.253437 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:20:20.355626 containerd[2539]: time="2026-01-16T21:20:20.355588685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:20.878824 containerd[2539]: time="2026-01-16T21:20:20.878776349Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:20.879298 containerd[2539]: time="2026-01-16T21:20:20.878860440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:20.879354 kubelet[3994]: E0116 21:20:20.879311 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:20.879621 kubelet[3994]: E0116 21:20:20.879365 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:20.879621 kubelet[3994]: E0116 21:20:20.879578 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m69pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:20.880951 kubelet[3994]: E0116 21:20:20.880915 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:20:20.884272 containerd[2539]: time="2026-01-16T21:20:20.884093196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:20:21.607672 containerd[2539]: time="2026-01-16T21:20:21.607632911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:21.765387 containerd[2539]: time="2026-01-16T21:20:21.765325271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:20:21.765823 containerd[2539]: time="2026-01-16T21:20:21.765394891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:21.765888 kubelet[3994]: E0116 21:20:21.765553 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:21.765888 kubelet[3994]: E0116 21:20:21.765585 3994 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:21.766226 containerd[2539]: time="2026-01-16T21:20:21.766163795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:20:21.766325 kubelet[3994]: E0116 21:20:21.766202 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dbrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:21.767710 kubelet[3994]: E0116 21:20:21.767642 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:22.314200 
containerd[2539]: time="2026-01-16T21:20:22.314172861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:23.253087 kubelet[3994]: E0116 21:20:23.253034 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:20:23.928850 containerd[2539]: time="2026-01-16T21:20:23.926229694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:20:23.929197 containerd[2539]: time="2026-01-16T21:20:23.929178488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:23.932258 kubelet[3994]: E0116 21:20:23.929304 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:23.932258 kubelet[3994]: E0116 21:20:23.931891 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:23.932399 kubelet[3994]: E0116 21:20:23.932014 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePol
icy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:23.933790 kubelet[3994]: E0116 21:20:23.933762 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:20:24.257586 kubelet[3994]: E0116 21:20:24.257550 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:20:31.253889 containerd[2539]: time="2026-01-16T21:20:31.253816757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:20:31.504642 containerd[2539]: time="2026-01-16T21:20:31.504114195Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:31.506829 containerd[2539]: time="2026-01-16T21:20:31.506727844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:20:31.506829 containerd[2539]: time="2026-01-16T21:20:31.506796047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:31.507007 kubelet[3994]: E0116 21:20:31.506910 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:20:31.507007 kubelet[3994]: E0116 21:20:31.506958 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:20:31.507698 kubelet[3994]: E0116 21:20:31.507065 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e06177460524888ac9b32b3cfd69bc3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:31.509527 containerd[2539]: time="2026-01-16T21:20:31.509108967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:20:31.751646 containerd[2539]: 
time="2026-01-16T21:20:31.751606653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:31.755191 containerd[2539]: time="2026-01-16T21:20:31.755114017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:20:31.755459 containerd[2539]: time="2026-01-16T21:20:31.755181399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:31.755916 kubelet[3994]: E0116 21:20:31.755829 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:20:31.756077 kubelet[3994]: E0116 21:20:31.755911 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:20:31.756077 kubelet[3994]: E0116 21:20:31.756027 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:31.757349 kubelet[3994]: E0116 21:20:31.757294 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:20:33.253457 kubelet[3994]: E0116 21:20:33.253336 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:33.586132 systemd[1]: Started sshd@7-10.200.8.41:22-10.200.16.10:60780.service - OpenSSH per-connection server daemon (10.200.16.10:60780). Jan 16 21:20:33.590862 kernel: kauditd_printk_skb: 192 callbacks suppressed Jan 16 21:20:33.590945 kernel: audit: type=1130 audit(1768598433.585:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.41:22-10.200.16.10:60780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 16 21:20:33.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.41:22-10.200.16.10:60780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:34.141000 audit[6355]: USER_ACCT pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.145766 sshd-session[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:34.146785 sshd[6355]: Accepted publickey for core from 10.200.16.10 port 60780 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:34.146921 kernel: audit: type=1101 audit(1768598434.141:767): pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.143000 audit[6355]: CRED_ACQ pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.153183 kernel: audit: type=1103 audit(1768598434.143:768): pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.153252 kernel: audit: type=1006 audit(1768598434.144:769): pid=6355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=11 res=1 Jan 16 21:20:34.156136 kernel: audit: type=1300 audit(1768598434.144:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcec5c9de0 a2=3 a3=0 items=0 ppid=1 pid=6355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:34.144000 audit[6355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcec5c9de0 a2=3 a3=0 items=0 ppid=1 pid=6355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:34.159961 systemd-logind[2504]: New session 11 of user core. Jan 16 21:20:34.164506 kernel: audit: type=1327 audit(1768598434.144:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:34.144000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:34.170015 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 21:20:34.171000 audit[6355]: USER_START pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.173000 audit[6359]: CRED_ACQ pid=6359 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.182146 kernel: audit: type=1105 audit(1768598434.171:770): pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.182215 kernel: audit: type=1103 audit(1768598434.173:771): pid=6359 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.253494 kubelet[3994]: E0116 21:20:34.253444 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:20:34.255214 kubelet[3994]: E0116 
21:20:34.255169 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:34.511537 sshd[6359]: Connection closed by 10.200.16.10 port 60780 Jan 16 21:20:34.513479 sshd-session[6355]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:34.513000 audit[6355]: USER_END pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.516880 systemd-logind[2504]: Session 11 logged out. Waiting for processes to exit. Jan 16 21:20:34.517892 systemd[1]: sshd@7-10.200.8.41:22-10.200.16.10:60780.service: Deactivated successfully. Jan 16 21:20:34.520016 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 21:20:34.521954 systemd-logind[2504]: Removed session 11. 
Jan 16 21:20:34.513000 audit[6355]: CRED_DISP pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.528371 kernel: audit: type=1106 audit(1768598434.513:772): pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.528437 kernel: audit: type=1104 audit(1768598434.513:773): pid=6355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:34.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.41:22-10.200.16.10:60780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:35.253791 containerd[2539]: time="2026-01-16T21:20:35.253651818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:20:35.254629 kubelet[3994]: E0116 21:20:35.254558 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:20:35.503233 containerd[2539]: time="2026-01-16T21:20:35.503204725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:35.505789 containerd[2539]: time="2026-01-16T21:20:35.505660045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:20:35.505789 containerd[2539]: time="2026-01-16T21:20:35.505734085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:35.506000 kubelet[3994]: E0116 21:20:35.505977 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:20:35.506074 kubelet[3994]: E0116 21:20:35.506064 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:20:35.506243 kubelet[3994]: E0116 21:20:35.506205 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:35.507617 kubelet[3994]: E0116 21:20:35.507594 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:20:39.253396 containerd[2539]: time="2026-01-16T21:20:39.253329217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:39.500562 containerd[2539]: time="2026-01-16T21:20:39.500526528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:39.502994 containerd[2539]: time="2026-01-16T21:20:39.502899123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:39.502994 containerd[2539]: time="2026-01-16T21:20:39.502966920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:39.504074 kubelet[3994]: E0116 21:20:39.503921 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:39.504074 kubelet[3994]: E0116 21:20:39.503983 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:39.504538 kubelet[3994]: E0116 21:20:39.504479 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dm8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:39.505665 kubelet[3994]: E0116 21:20:39.505627 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:20:39.630346 systemd[1]: Started sshd@8-10.200.8.41:22-10.200.16.10:37178.service - OpenSSH per-connection server daemon (10.200.16.10:37178). Jan 16 21:20:39.632571 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:20:39.632697 kernel: audit: type=1130 audit(1768598439.628:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.41:22-10.200.16.10:37178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:39.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.41:22-10.200.16.10:37178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:40.275000 audit[6372]: USER_ACCT pid=6372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.280054 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:40.281190 sshd[6372]: Accepted publickey for core from 10.200.16.10 port 37178 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:40.289599 kernel: audit: type=1101 audit(1768598440.275:776): pid=6372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.289671 kernel: audit: type=1103 audit(1768598440.277:777): pid=6372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.289691 kernel: audit: type=1006 audit(1768598440.277:778): pid=6372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 16 21:20:40.277000 audit[6372]: CRED_ACQ pid=6372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.287362 systemd-logind[2504]: New session 12 of user core. 
Jan 16 21:20:40.277000 audit[6372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff98c5bce0 a2=3 a3=0 items=0 ppid=1 pid=6372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:40.294536 kernel: audit: type=1300 audit(1768598440.277:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff98c5bce0 a2=3 a3=0 items=0 ppid=1 pid=6372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:40.277000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:40.296624 kernel: audit: type=1327 audit(1768598440.277:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:40.299285 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 21:20:40.299000 audit[6372]: USER_START pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.305859 kernel: audit: type=1105 audit(1768598440.299:779): pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.304000 audit[6376]: CRED_ACQ pid=6376 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.311862 kernel: audit: type=1103 audit(1768598440.304:780): pid=6376 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.665633 sshd[6376]: Connection closed by 10.200.16.10 port 37178 Jan 16 21:20:40.665986 sshd-session[6372]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:40.666000 audit[6372]: USER_END pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.670007 systemd-logind[2504]: Session 12 logged out. 
Waiting for processes to exit. Jan 16 21:20:40.671589 systemd[1]: sshd@8-10.200.8.41:22-10.200.16.10:37178.service: Deactivated successfully. Jan 16 21:20:40.674031 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 21:20:40.675869 systemd-logind[2504]: Removed session 12. Jan 16 21:20:40.666000 audit[6372]: CRED_DISP pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.677067 kernel: audit: type=1106 audit(1768598440.666:781): pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.677194 kernel: audit: type=1104 audit(1768598440.666:782): pid=6372 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:40.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.41:22-10.200.16.10:37178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:45.254306 kubelet[3994]: E0116 21:20:45.254255 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:20:45.785009 systemd[1]: Started sshd@9-10.200.8.41:22-10.200.16.10:37186.service - OpenSSH per-connection server daemon (10.200.16.10:37186). Jan 16 21:20:45.790864 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:20:45.790948 kernel: audit: type=1130 audit(1768598445.784:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.41:22-10.200.16.10:37186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:45.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.41:22-10.200.16.10:37186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:46.396758 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 37186 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:46.395000 audit[6391]: USER_ACCT pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.404854 kernel: audit: type=1101 audit(1768598446.395:785): pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.405916 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:46.404000 audit[6391]: CRED_ACQ pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.413861 kernel: audit: type=1103 audit(1768598446.404:786): pid=6391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.418898 systemd-logind[2504]: New session 13 of user core. 
Jan 16 21:20:46.424991 kernel: audit: type=1006 audit(1768598446.404:787): pid=6391 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 16 21:20:46.404000 audit[6391]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb20a3c70 a2=3 a3=0 items=0 ppid=1 pid=6391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:46.435784 kernel: audit: type=1300 audit(1768598446.404:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb20a3c70 a2=3 a3=0 items=0 ppid=1 pid=6391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:46.435866 kernel: audit: type=1327 audit(1768598446.404:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:46.404000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:46.436134 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 16 21:20:46.448976 kernel: audit: type=1105 audit(1768598446.438:788): pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.438000 audit[6391]: USER_START pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.456937 kernel: audit: type=1103 audit(1768598446.448:789): pid=6397 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.448000 audit[6397]: CRED_ACQ pid=6397 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.865731 sshd[6397]: Connection closed by 10.200.16.10 port 37186 Jan 16 21:20:46.866163 sshd-session[6391]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:46.866000 audit[6391]: USER_END pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.870827 systemd[1]: sshd@9-10.200.8.41:22-10.200.16.10:37186.service: 
Deactivated successfully. Jan 16 21:20:46.873432 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 21:20:46.875892 kernel: audit: type=1106 audit(1768598446.866:790): pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.866000 audit[6391]: CRED_DISP pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.877339 systemd-logind[2504]: Session 13 logged out. Waiting for processes to exit. Jan 16 21:20:46.879004 systemd-logind[2504]: Removed session 13. Jan 16 21:20:46.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.41:22-10.200.16.10:37186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:46.880876 kernel: audit: type=1104 audit(1768598446.866:791): pid=6391 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:46.980233 systemd[1]: Started sshd@10-10.200.8.41:22-10.200.16.10:37188.service - OpenSSH per-connection server daemon (10.200.16.10:37188). Jan 16 21:20:46.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.41:22-10.200.16.10:37188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:47.253732 containerd[2539]: time="2026-01-16T21:20:47.253632973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:47.509504 containerd[2539]: time="2026-01-16T21:20:47.509370149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:47.512377 containerd[2539]: time="2026-01-16T21:20:47.512111281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:47.512377 containerd[2539]: time="2026-01-16T21:20:47.512173609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:47.512471 kubelet[3994]: E0116 21:20:47.512268 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:47.512471 kubelet[3994]: E0116 21:20:47.512325 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:47.513158 kubelet[3994]: E0116 21:20:47.513110 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:47.514285 kubelet[3994]: E0116 21:20:47.514259 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:20:47.531000 audit[6410]: USER_ACCT pid=6410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.532518 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 37188 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:47.532000 audit[6410]: CRED_ACQ pid=6410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.532000 audit[6410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0bdb0b30 a2=3 a3=0 items=0 ppid=1 pid=6410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:47.532000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:47.533888 sshd-session[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:47.537955 
systemd-logind[2504]: New session 14 of user core. Jan 16 21:20:47.542004 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 21:20:47.543000 audit[6410]: USER_START pid=6410 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.544000 audit[6439]: CRED_ACQ pid=6439 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.912339 sshd[6439]: Connection closed by 10.200.16.10 port 37188 Jan 16 21:20:47.916389 sshd-session[6410]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:47.916000 audit[6410]: USER_END pid=6410 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.916000 audit[6410]: CRED_DISP pid=6410 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:47.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.41:22-10.200.16.10:37188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:47.918983 systemd[1]: sshd@10-10.200.8.41:22-10.200.16.10:37188.service: Deactivated successfully. 
Jan 16 21:20:47.920436 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 21:20:47.923932 systemd-logind[2504]: Session 14 logged out. Waiting for processes to exit. Jan 16 21:20:47.926501 systemd-logind[2504]: Removed session 14. Jan 16 21:20:48.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.41:22-10.200.16.10:37202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:48.024065 systemd[1]: Started sshd@11-10.200.8.41:22-10.200.16.10:37202.service - OpenSSH per-connection server daemon (10.200.16.10:37202). Jan 16 21:20:48.256848 containerd[2539]: time="2026-01-16T21:20:48.255823954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:20:48.511859 containerd[2539]: time="2026-01-16T21:20:48.511702419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:48.514439 containerd[2539]: time="2026-01-16T21:20:48.514302104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:20:48.514555 containerd[2539]: time="2026-01-16T21:20:48.514336678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:48.514782 kubelet[3994]: E0116 21:20:48.514623 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:48.514782 kubelet[3994]: E0116 21:20:48.514682 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:20:48.515218 containerd[2539]: time="2026-01-16T21:20:48.515200171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:20:48.515588 kubelet[3994]: E0116 21:20:48.515269 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runti
meDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:48.585000 audit[6449]: USER_ACCT pid=6449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.586388 sshd[6449]: Accepted publickey for core from 10.200.16.10 port 37202 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:48.586000 audit[6449]: CRED_ACQ pid=6449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.586000 audit[6449]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc632c350 a2=3 a3=0 items=0 ppid=1 pid=6449 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:48.586000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:48.587845 sshd-session[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:48.591621 systemd-logind[2504]: New session 15 of user core. 
Jan 16 21:20:48.602118 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 21:20:48.604000 audit[6449]: USER_START pid=6449 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.606000 audit[6453]: CRED_ACQ pid=6453 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.758713 containerd[2539]: time="2026-01-16T21:20:48.758588444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:48.761019 containerd[2539]: time="2026-01-16T21:20:48.760974715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:20:48.761090 containerd[2539]: time="2026-01-16T21:20:48.761080150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:48.761222 kubelet[3994]: E0116 21:20:48.761184 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:48.761222 kubelet[3994]: E0116 21:20:48.761217 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:20:48.761448 kubelet[3994]: E0116 21:20:48.761400 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dbrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:48.761999 containerd[2539]: time="2026-01-16T21:20:48.761826598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:20:48.762721 kubelet[3994]: E0116 21:20:48.762683 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:20:48.949330 sshd[6453]: Connection closed by 10.200.16.10 port 37202 Jan 16 21:20:48.949724 sshd-session[6449]: pam_unix(sshd:session): session 
closed for user core Jan 16 21:20:48.950000 audit[6449]: USER_END pid=6449 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.950000 audit[6449]: CRED_DISP pid=6449 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:48.953366 systemd[1]: sshd@11-10.200.8.41:22-10.200.16.10:37202.service: Deactivated successfully. Jan 16 21:20:48.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.41:22-10.200.16.10:37202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:48.956134 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 21:20:48.958147 systemd-logind[2504]: Session 15 logged out. Waiting for processes to exit. Jan 16 21:20:48.958828 systemd-logind[2504]: Removed session 15. 
Jan 16 21:20:49.018190 containerd[2539]: time="2026-01-16T21:20:49.018122777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:49.020457 containerd[2539]: time="2026-01-16T21:20:49.020419148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:20:49.020457 containerd[2539]: time="2026-01-16T21:20:49.020443131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:49.020588 kubelet[3994]: E0116 21:20:49.020564 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:49.020632 kubelet[3994]: E0116 21:20:49.020589 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:20:49.020812 containerd[2539]: time="2026-01-16T21:20:49.020781150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:20:49.021094 kubelet[3994]: E0116 21:20:49.021051 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m69pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:49.022436 kubelet[3994]: E0116 21:20:49.022405 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:20:49.318113 containerd[2539]: time="2026-01-16T21:20:49.318044024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:20:49.320357 containerd[2539]: time="2026-01-16T21:20:49.320333585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:20:49.320435 containerd[2539]: time="2026-01-16T21:20:49.320377931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:49.320470 kubelet[3994]: E0116 21:20:49.320452 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:49.320498 kubelet[3994]: E0116 21:20:49.320489 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:20:49.320743 kubelet[3994]: E0116 21:20:49.320601 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:20:49.321918 kubelet[3994]: E0116 21:20:49.321899 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:20:50.256104 kubelet[3994]: E0116 21:20:50.255357 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:20:54.063559 systemd[1]: Started sshd@12-10.200.8.41:22-10.200.16.10:48036.service - OpenSSH per-connection server daemon 
(10.200.16.10:48036). Jan 16 21:20:54.070325 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 21:20:54.070397 kernel: audit: type=1130 audit(1768598454.062:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.41:22-10.200.16.10:48036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:54.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.41:22-10.200.16.10:48036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:54.255088 kubelet[3994]: E0116 21:20:54.254323 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:20:54.621872 kernel: audit: type=1101 audit(1768598454.615:812): pid=6469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.615000 audit[6469]: USER_ACCT pid=6469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.622019 sshd[6469]: Accepted publickey for core from 10.200.16.10 port 48036 ssh2: RSA 
SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:20:54.622731 sshd-session[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:54.620000 audit[6469]: CRED_ACQ pid=6469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.630846 kernel: audit: type=1103 audit(1768598454.620:813): pid=6469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.633644 systemd-logind[2504]: New session 16 of user core. Jan 16 21:20:54.620000 audit[6469]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbee8b0c0 a2=3 a3=0 items=0 ppid=1 pid=6469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:54.643822 kernel: audit: type=1006 audit(1768598454.620:814): pid=6469 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 21:20:54.643903 kernel: audit: type=1300 audit(1768598454.620:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbee8b0c0 a2=3 a3=0 items=0 ppid=1 pid=6469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:54.644174 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 21:20:54.647891 kernel: audit: type=1327 audit(1768598454.620:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:54.620000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:54.648000 audit[6469]: USER_START pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.659466 kernel: audit: type=1105 audit(1768598454.648:815): pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.659525 kernel: audit: type=1103 audit(1768598454.652:816): pid=6473 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:54.652000 audit[6473]: CRED_ACQ pid=6473 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:55.025248 sshd[6473]: Connection closed by 10.200.16.10 port 48036 Jan 16 21:20:55.025634 sshd-session[6469]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:55.026000 audit[6469]: USER_END pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:55.029348 systemd[1]: sshd@12-10.200.8.41:22-10.200.16.10:48036.service: Deactivated successfully. Jan 16 21:20:55.032052 systemd-logind[2504]: Session 16 logged out. Waiting for processes to exit. Jan 16 21:20:55.026000 audit[6469]: CRED_DISP pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:55.035973 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 21:20:55.037427 systemd-logind[2504]: Removed session 16. Jan 16 21:20:55.039995 kernel: audit: type=1106 audit(1768598455.026:817): pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:55.040050 kernel: audit: type=1104 audit(1768598455.026:818): pid=6469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:20:55.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.41:22-10.200.16.10:48036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:57.253723 kubelet[3994]: E0116 21:20:57.253594 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:20:59.252590 kubelet[3994]: E0116 21:20:59.252553 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:21:00.142747 systemd[1]: Started sshd@13-10.200.8.41:22-10.200.16.10:59494.service - OpenSSH per-connection server daemon (10.200.16.10:59494). 
Jan 16 21:21:00.145073 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:00.145272 kernel: audit: type=1130 audit(1768598460.142:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.41:22-10.200.16.10:59494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:00.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.41:22-10.200.16.10:59494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:00.255159 kubelet[3994]: E0116 21:21:00.255122 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:21:00.692000 audit[6488]: USER_ACCT pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.698538 kernel: audit: type=1101 audit(1768598460.692:821): pid=6488 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.698607 sshd[6488]: Accepted publickey for core from 10.200.16.10 port 59494 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:00.699821 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:00.698000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.705908 kernel: audit: type=1103 audit(1768598460.698:822): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.713914 kernel: audit: type=1006 audit(1768598460.698:823): pid=6488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 16 21:21:00.713957 kernel: audit: type=1300 audit(1768598460.698:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6b446e50 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:00.698000 audit[6488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6b446e50 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:00.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:00.715856 kernel: audit: type=1327 audit(1768598460.698:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:00.718035 systemd-logind[2504]: New session 17 of user core. Jan 16 21:21:00.725028 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 21:21:00.727000 audit[6488]: USER_START pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.738808 kernel: audit: type=1105 audit(1768598460.727:824): pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.738886 kernel: audit: type=1103 audit(1768598460.733:825): pid=6492 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:00.733000 audit[6492]: CRED_ACQ pid=6492 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:01.071499 sshd[6492]: Connection closed by 10.200.16.10 port 59494 Jan 16 21:21:01.071869 sshd-session[6488]: pam_unix(sshd:session): session closed for user core Jan 16 
21:21:01.072000 audit[6488]: USER_END pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:01.081511 kernel: audit: type=1106 audit(1768598461.072:826): pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:01.082236 systemd-logind[2504]: Session 17 logged out. Waiting for processes to exit. Jan 16 21:21:01.082654 systemd[1]: sshd@13-10.200.8.41:22-10.200.16.10:59494.service: Deactivated successfully. Jan 16 21:21:01.085248 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 21:21:01.091890 kernel: audit: type=1104 audit(1768598461.072:827): pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:01.072000 audit[6488]: CRED_DISP pid=6488 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:01.092546 systemd-logind[2504]: Removed session 17. Jan 16 21:21:01.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.41:22-10.200.16.10:59494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:02.255283 kubelet[3994]: E0116 21:21:02.255250 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:21:04.256036 kubelet[3994]: E0116 21:21:04.255990 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:21:04.257922 kubelet[3994]: E0116 21:21:04.257556 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:21:06.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.41:22-10.200.16.10:59500 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:06.186272 systemd[1]: Started sshd@14-10.200.8.41:22-10.200.16.10:59500.service - OpenSSH per-connection server daemon (10.200.16.10:59500). Jan 16 21:21:06.187505 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:06.187545 kernel: audit: type=1130 audit(1768598466.185:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.41:22-10.200.16.10:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:06.735000 audit[6505]: USER_ACCT pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.738156 sshd-session[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:06.740664 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 59500 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:06.735000 audit[6505]: CRED_ACQ pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.744855 kernel: audit: type=1101 audit(1768598466.735:830): pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.744910 kernel: audit: type=1103 audit(1768598466.735:831): pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.749256 systemd-logind[2504]: New session 18 of user core. Jan 16 21:21:06.749708 kernel: audit: type=1006 audit(1768598466.735:832): pid=6505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 16 21:21:06.735000 audit[6505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc46d8370 a2=3 a3=0 items=0 ppid=1 pid=6505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:06.755711 kernel: audit: type=1300 audit(1768598466.735:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc46d8370 a2=3 a3=0 items=0 ppid=1 pid=6505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:06.756374 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 16 21:21:06.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:06.759597 kernel: audit: type=1327 audit(1768598466.735:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:06.760000 audit[6505]: USER_START pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.765000 audit[6509]: CRED_ACQ pid=6509 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.769569 kernel: audit: type=1105 audit(1768598466.760:833): pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:06.769634 kernel: audit: type=1103 audit(1768598466.765:834): pid=6509 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.095900 sshd[6509]: Connection closed by 10.200.16.10 port 59500 Jan 16 21:21:07.095968 sshd-session[6505]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:07.096000 audit[6505]: USER_END pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.104876 kernel: audit: type=1106 audit(1768598467.096:835): pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.105242 systemd[1]: sshd@14-10.200.8.41:22-10.200.16.10:59500.service: Deactivated successfully. Jan 16 21:21:07.107265 systemd-logind[2504]: Session 18 logged out. Waiting for processes to exit. Jan 16 21:21:07.108140 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 21:21:07.111432 systemd-logind[2504]: Removed session 18. Jan 16 21:21:07.096000 audit[6505]: CRED_DISP pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.41:22-10.200.16.10:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:07.116848 kernel: audit: type=1104 audit(1768598467.096:836): pid=6505 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.205356 systemd[1]: Started sshd@15-10.200.8.41:22-10.200.16.10:59510.service - OpenSSH per-connection server daemon (10.200.16.10:59510). 
Jan 16 21:21:07.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.41:22-10.200.16.10:59510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:07.766000 audit[6521]: USER_ACCT pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.767350 sshd[6521]: Accepted publickey for core from 10.200.16.10 port 59510 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:07.767000 audit[6521]: CRED_ACQ pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.767000 audit[6521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4b474820 a2=3 a3=0 items=0 ppid=1 pid=6521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:07.767000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:07.768657 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:07.773130 systemd-logind[2504]: New session 19 of user core. Jan 16 21:21:07.777994 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 16 21:21:07.779000 audit[6521]: USER_START pid=6521 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:07.780000 audit[6526]: CRED_ACQ pid=6526 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.198712 sshd[6526]: Connection closed by 10.200.16.10 port 59510 Jan 16 21:21:08.199054 sshd-session[6521]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:08.199000 audit[6521]: USER_END pid=6521 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.199000 audit[6521]: CRED_DISP pid=6521 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.201559 systemd[1]: sshd@15-10.200.8.41:22-10.200.16.10:59510.service: Deactivated successfully. Jan 16 21:21:08.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.41:22-10.200.16.10:59510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:08.203658 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 16 21:21:08.205817 systemd-logind[2504]: Session 19 logged out. Waiting for processes to exit. Jan 16 21:21:08.207259 systemd-logind[2504]: Removed session 19. Jan 16 21:21:08.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.41:22-10.200.16.10:59518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:08.313077 systemd[1]: Started sshd@16-10.200.8.41:22-10.200.16.10:59518.service - OpenSSH per-connection server daemon (10.200.16.10:59518). Jan 16 21:21:08.855000 audit[6536]: USER_ACCT pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.857008 sshd[6536]: Accepted publickey for core from 10.200.16.10 port 59518 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:08.857000 audit[6536]: CRED_ACQ pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.857000 audit[6536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe43237e10 a2=3 a3=0 items=0 ppid=1 pid=6536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:08.857000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:08.859178 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:08.863096 systemd-logind[2504]: New session 20 of user core. 
Jan 16 21:21:08.867990 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 21:21:08.870000 audit[6536]: USER_START pid=6536 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:08.871000 audit[6540]: CRED_ACQ pid=6540 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:09.253115 kubelet[3994]: E0116 21:21:09.253042 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:21:09.576000 audit[6550]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=6550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:09.576000 audit[6550]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdc3be5550 a2=0 a3=7ffdc3be553c items=0 ppid=4151 pid=6550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.576000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:09.582000 audit[6550]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=6550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:09.582000 audit[6550]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc3be5550 a2=0 a3=0 items=0 ppid=4151 pid=6550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:09.606000 audit[6555]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=6555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:09.606000 audit[6555]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee0903d20 a2=0 a3=7ffee0903d0c items=0 ppid=4151 pid=6555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:09.614000 audit[6555]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=6555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:09.614000 audit[6555]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee0903d20 a2=0 a3=0 items=0 ppid=4151 pid=6555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:21:09.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:09.669766 sshd[6540]: Connection closed by 10.200.16.10 port 59518 Jan 16 21:21:09.670150 sshd-session[6536]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:09.670000 audit[6536]: USER_END pid=6536 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:09.670000 audit[6536]: CRED_DISP pid=6536 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:09.672906 systemd[1]: sshd@16-10.200.8.41:22-10.200.16.10:59518.service: Deactivated successfully. Jan 16 21:21:09.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.41:22-10.200.16.10:59518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:09.674760 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 21:21:09.675531 systemd-logind[2504]: Session 20 logged out. Waiting for processes to exit. Jan 16 21:21:09.677758 systemd-logind[2504]: Removed session 20. Jan 16 21:21:09.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.41:22-10.200.16.10:56342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:09.781307 systemd[1]: Started sshd@17-10.200.8.41:22-10.200.16.10:56342.service - OpenSSH per-connection server daemon (10.200.16.10:56342). Jan 16 21:21:10.325000 audit[6563]: USER_ACCT pid=6563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.326829 sshd[6563]: Accepted publickey for core from 10.200.16.10 port 56342 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:10.326000 audit[6563]: CRED_ACQ pid=6563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.326000 audit[6563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b9a1e80 a2=3 a3=0 items=0 ppid=1 pid=6563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.326000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:10.328455 sshd-session[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:10.332478 systemd-logind[2504]: New session 21 of user core. Jan 16 21:21:10.339101 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 21:21:10.341000 audit[6563]: USER_START pid=6563 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.343000 audit[6567]: CRED_ACQ pid=6567 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.803481 sshd[6567]: Connection closed by 10.200.16.10 port 56342 Jan 16 21:21:10.805164 sshd-session[6563]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:10.805000 audit[6563]: USER_END pid=6563 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.805000 audit[6563]: CRED_DISP pid=6563 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:10.808392 systemd-logind[2504]: Session 21 logged out. Waiting for processes to exit. Jan 16 21:21:10.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.41:22-10.200.16.10:56342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:10.809193 systemd[1]: sshd@17-10.200.8.41:22-10.200.16.10:56342.service: Deactivated successfully. 
Jan 16 21:21:10.812070 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 21:21:10.815026 systemd-logind[2504]: Removed session 21. Jan 16 21:21:10.918433 systemd[1]: Started sshd@18-10.200.8.41:22-10.200.16.10:56358.service - OpenSSH per-connection server daemon (10.200.16.10:56358). Jan 16 21:21:10.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.41:22-10.200.16.10:56358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:11.254137 kubelet[3994]: E0116 21:21:11.254049 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:21:11.463876 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 16 21:21:11.463937 kernel: audit: type=1101 audit(1768598471.458:870): pid=6577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.458000 audit[6577]: USER_ACCT pid=6577 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.462613 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:11.464185 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 56358 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:11.469433 kernel: audit: type=1103 audit(1768598471.460:871): pid=6577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.460000 audit[6577]: CRED_ACQ pid=6577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.472143 kernel: audit: type=1006 audit(1768598471.460:872): pid=6577 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 16 21:21:11.476527 kernel: audit: type=1300 audit(1768598471.460:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe04828240 a2=3 a3=0 items=0 ppid=1 pid=6577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.460000 audit[6577]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe04828240 a2=3 a3=0 items=0 ppid=1 pid=6577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 16 21:21:11.460000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:11.473470 systemd-logind[2504]: New session 22 of user core. Jan 16 21:21:11.476887 kernel: audit: type=1327 audit(1768598471.460:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:11.483033 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 21:21:11.484000 audit[6577]: USER_START pid=6577 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.489000 audit[6581]: CRED_ACQ pid=6581 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.493077 kernel: audit: type=1105 audit(1768598471.484:873): pid=6577 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.493119 kernel: audit: type=1103 audit(1768598471.489:874): pid=6581 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.809289 sshd[6581]: Connection closed by 10.200.16.10 port 56358 Jan 16 21:21:11.809615 sshd-session[6577]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:11.810000 audit[6577]: USER_END pid=6577 uid=0 
auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.818920 kernel: audit: type=1106 audit(1768598471.810:875): pid=6577 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.819175 systemd[1]: sshd@18-10.200.8.41:22-10.200.16.10:56358.service: Deactivated successfully. Jan 16 21:21:11.820948 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 21:21:11.822014 systemd-logind[2504]: Session 22 logged out. Waiting for processes to exit. Jan 16 21:21:11.823707 systemd-logind[2504]: Removed session 22. Jan 16 21:21:11.810000 audit[6577]: CRED_DISP pid=6577 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.41:22-10.200.16.10:56358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:11.834699 kernel: audit: type=1104 audit(1768598471.810:876): pid=6577 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:11.834799 kernel: audit: type=1131 audit(1768598471.818:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.41:22-10.200.16.10:56358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:14.254787 kubelet[3994]: E0116 21:21:14.254545 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:21:14.257564 kubelet[3994]: E0116 21:21:14.257521 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:21:14.314000 audit[6593]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=6593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:14.314000 audit[6593]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd63089a00 a2=0 a3=7ffd630899ec items=0 ppid=4151 pid=6593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:14.322000 audit[6593]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=6593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:14.322000 audit[6593]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd63089a00 a2=0 a3=7ffd630899ec items=0 ppid=4151 pid=6593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:15.253204 kubelet[3994]: E0116 21:21:15.253175 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:21:15.253560 kubelet[3994]: E0116 21:21:15.253539 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:21:16.923101 systemd[1]: Started sshd@19-10.200.8.41:22-10.200.16.10:56374.service - OpenSSH per-connection server daemon (10.200.16.10:56374). Jan 16 21:21:16.929274 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 16 21:21:16.929303 kernel: audit: type=1130 audit(1768598476.921:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.41:22-10.200.16.10:56374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:16.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.41:22-10.200.16.10:56374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:17.253112 kubelet[3994]: E0116 21:21:17.253065 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:21:17.475000 audit[6595]: USER_ACCT pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.478919 sshd-session[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:17.480115 sshd[6595]: Accepted publickey for core from 10.200.16.10 port 56374 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:17.475000 audit[6595]: CRED_ACQ pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.487131 systemd-logind[2504]: New session 23 of user core. 
Jan 16 21:21:17.489435 kernel: audit: type=1101 audit(1768598477.475:881): pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.489488 kernel: audit: type=1103 audit(1768598477.475:882): pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.492145 kernel: audit: type=1006 audit(1768598477.475:883): pid=6595 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 16 21:21:17.491993 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 21:21:17.475000 audit[6595]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd32ce2ad0 a2=3 a3=0 items=0 ppid=1 pid=6595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.497394 kernel: audit: type=1300 audit(1768598477.475:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd32ce2ad0 a2=3 a3=0 items=0 ppid=1 pid=6595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.475000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:17.500522 kernel: audit: type=1327 audit(1768598477.475:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:17.496000 audit[6595]: USER_START pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.515116 kernel: audit: type=1105 audit(1768598477.496:884): pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.498000 audit[6624]: CRED_ACQ pid=6624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.522531 kernel: audit: type=1103 audit(1768598477.498:885): pid=6624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.831758 sshd[6624]: Connection closed by 10.200.16.10 port 56374 Jan 16 21:21:17.832013 sshd-session[6595]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:17.830000 audit[6595]: USER_END pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.835886 systemd[1]: sshd@19-10.200.8.41:22-10.200.16.10:56374.service: Deactivated successfully. Jan 16 21:21:17.837779 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 16 21:21:17.840848 kernel: audit: type=1106 audit(1768598477.830:886): pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.831000 audit[6595]: CRED_DISP pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:17.841391 systemd-logind[2504]: Session 23 logged out. Waiting for processes to exit. Jan 16 21:21:17.842656 systemd-logind[2504]: Removed session 23. Jan 16 21:21:17.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.41:22-10.200.16.10:56374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:17.845987 kernel: audit: type=1104 audit(1768598477.831:887): pid=6595 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:22.944765 systemd[1]: Started sshd@20-10.200.8.41:22-10.200.16.10:54890.service - OpenSSH per-connection server daemon (10.200.16.10:54890). Jan 16 21:21:22.951070 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:22.951139 kernel: audit: type=1130 audit(1768598482.944:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.41:22-10.200.16.10:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:22.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.41:22-10.200.16.10:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:23.496000 audit[6645]: USER_ACCT pid=6645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.499288 sshd[6645]: Accepted publickey for core from 10.200.16.10 port 54890 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:23.502321 sshd-session[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:23.503157 kernel: audit: type=1101 audit(1768598483.496:890): pid=6645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.500000 audit[6645]: CRED_ACQ pid=6645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.511470 kernel: audit: type=1103 audit(1768598483.500:891): pid=6645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.510561 systemd-logind[2504]: New session 24 of user core. 
Jan 16 21:21:23.515874 kernel: audit: type=1006 audit(1768598483.500:892): pid=6645 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 16 21:21:23.515929 kernel: audit: type=1300 audit(1768598483.500:892): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1a601ae0 a2=3 a3=0 items=0 ppid=1 pid=6645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:23.500000 audit[6645]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1a601ae0 a2=3 a3=0 items=0 ppid=1 pid=6645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:23.517541 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 21:21:23.521430 kernel: audit: type=1327 audit(1768598483.500:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:23.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:23.524000 audit[6645]: USER_START pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.531985 kernel: audit: type=1105 audit(1768598483.524:893): pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 
21:21:23.531000 audit[6649]: CRED_ACQ pid=6649 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.536851 kernel: audit: type=1103 audit(1768598483.531:894): pid=6649 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.848110 sshd[6649]: Connection closed by 10.200.16.10 port 54890 Jan 16 21:21:23.848562 sshd-session[6645]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:23.848000 audit[6645]: USER_END pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.853985 systemd-logind[2504]: Session 24 logged out. Waiting for processes to exit. Jan 16 21:21:23.854508 systemd[1]: sshd@20-10.200.8.41:22-10.200.16.10:54890.service: Deactivated successfully. 
Jan 16 21:21:23.848000 audit[6645]: CRED_DISP pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.857895 kernel: audit: type=1106 audit(1768598483.848:895): pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.857948 kernel: audit: type=1104 audit(1768598483.848:896): pid=6645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:23.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.41:22-10.200.16.10:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:23.861021 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 21:21:23.863064 systemd-logind[2504]: Removed session 24. 
Jan 16 21:21:24.253469 containerd[2539]: time="2026-01-16T21:21:24.253438068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:24.515381 containerd[2539]: time="2026-01-16T21:21:24.515216166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:24.517906 containerd[2539]: time="2026-01-16T21:21:24.517826349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:24.517906 containerd[2539]: time="2026-01-16T21:21:24.517870599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:24.518050 kubelet[3994]: E0116 21:21:24.518005 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:24.518538 kubelet[3994]: E0116 21:21:24.518061 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:24.518538 kubelet[3994]: E0116 21:21:24.518179 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dm8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fcf4db84-9ph5q_calico-apiserver(2ea28179-07a2-4a0c-9cc7-b4eadca6090c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:24.519384 kubelet[3994]: E0116 21:21:24.519345 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c" Jan 16 21:21:25.254049 kubelet[3994]: E0116 21:21:25.253724 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a" Jan 16 21:21:25.254222 containerd[2539]: time="2026-01-16T21:21:25.253958515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:21:25.506294 containerd[2539]: time="2026-01-16T21:21:25.506219754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:25.509336 containerd[2539]: time="2026-01-16T21:21:25.509246556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:21:25.509336 containerd[2539]: time="2026-01-16T21:21:25.509317158Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:25.509557 kubelet[3994]: E0116 21:21:25.509535 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:25.509624 kubelet[3994]: E0116 21:21:25.509613 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:25.510162 kubelet[3994]: E0116 21:21:25.509751 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e06177460524888ac9b32b3cfd69bc3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:25.512312 containerd[2539]: time="2026-01-16T21:21:25.512280390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:21:25.759203 containerd[2539]: time="2026-01-16T21:21:25.758935561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:25.761756 containerd[2539]: time="2026-01-16T21:21:25.761700446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:21:25.761756 containerd[2539]: time="2026-01-16T21:21:25.761730764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:25.761883 kubelet[3994]: E0116 21:21:25.761846 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:25.762268 kubelet[3994]: E0116 
21:21:25.761883 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:25.762268 kubelet[3994]: E0116 21:21:25.761981 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:R
untimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589885c4ff-m24sm_calico-system(6049a9b7-c23b-4cd6-b5e0-33d5a278ce35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:25.763470 kubelet[3994]: E0116 21:21:25.763435 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35" Jan 16 21:21:28.257052 kubelet[3994]: E0116 21:21:28.256365 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae" Jan 16 21:21:28.960808 systemd[1]: Started 
sshd@21-10.200.8.41:22-10.200.16.10:54898.service - OpenSSH per-connection server daemon (10.200.16.10:54898). Jan 16 21:21:28.966968 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:28.967000 kernel: audit: type=1130 audit(1768598488.960:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.41:22-10.200.16.10:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:28.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.41:22-10.200.16.10:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:29.253563 containerd[2539]: time="2026-01-16T21:21:29.253531172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:21:29.499389 containerd[2539]: time="2026-01-16T21:21:29.499311322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:29.501875 containerd[2539]: time="2026-01-16T21:21:29.501768327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:21:29.503967 containerd[2539]: time="2026-01-16T21:21:29.501865718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:29.504179 kubelet[3994]: E0116 21:21:29.504134 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:29.504179 kubelet[3994]: E0116 21:21:29.504168 3994 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:29.505393 kubelet[3994]: E0116 21:21:29.504547 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:29.507784 containerd[2539]: time="2026-01-16T21:21:29.507733453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:21:29.507000 audit[6678]: USER_ACCT pid=6678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.508470 sshd[6678]: Accepted publickey for core from 10.200.16.10 port 54898 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns Jan 16 21:21:29.513056 sshd-session[6678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:29.514930 kernel: audit: type=1101 audit(1768598489.507:899): pid=6678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.507000 audit[6678]: CRED_ACQ pid=6678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.525886 kernel: audit: type=1103 audit(1768598489.507:900): pid=6678 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.534048 systemd-logind[2504]: New session 25 of user core. Jan 16 21:21:29.534851 kernel: audit: type=1006 audit(1768598489.507:901): pid=6678 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 16 21:21:29.507000 audit[6678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc8958ec0 a2=3 a3=0 items=0 ppid=1 pid=6678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:29.538315 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 16 21:21:29.544848 kernel: audit: type=1300 audit(1768598489.507:901): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc8958ec0 a2=3 a3=0 items=0 ppid=1 pid=6678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:29.507000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:29.551880 kernel: audit: type=1327 audit(1768598489.507:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:29.546000 audit[6678]: USER_START pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.561892 kernel: audit: type=1105 audit(1768598489.546:902): pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.547000 audit[6682]: CRED_ACQ pid=6682 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.570849 kernel: audit: type=1103 audit(1768598489.547:903): pid=6682 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.749820 containerd[2539]: time="2026-01-16T21:21:29.749765799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:29.753569 containerd[2539]: time="2026-01-16T21:21:29.753489576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:21:29.753569 containerd[2539]: time="2026-01-16T21:21:29.753551771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:29.754672 kubelet[3994]: E0116 21:21:29.754534 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 
21:21:29.754672 kubelet[3994]: E0116 21:21:29.754611 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:21:29.756189 kubelet[3994]: E0116 21:21:29.756134 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjs4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivil
egeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7pf8t_calico-system(ad5a70af-916a-4e95-9866-1f1c8f4329d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:29.757486 kubelet[3994]: E0116 21:21:29.757441 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0" Jan 16 21:21:29.877366 sshd[6682]: Connection closed by 10.200.16.10 port 54898 Jan 16 21:21:29.877960 sshd-session[6678]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:29.880000 audit[6678]: USER_END pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.885256 systemd[1]: sshd@21-10.200.8.41:22-10.200.16.10:54898.service: Deactivated successfully. Jan 16 21:21:29.890430 kernel: audit: type=1106 audit(1768598489.880:904): pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.890251 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 21:21:29.892598 systemd-logind[2504]: Session 25 logged out. Waiting for processes to exit. Jan 16 21:21:29.893451 systemd-logind[2504]: Removed session 25. Jan 16 21:21:29.880000 audit[6678]: CRED_DISP pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.899850 kernel: audit: type=1104 audit(1768598489.880:905): pid=6678 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 16 21:21:29.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.41:22-10.200.16.10:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:30.255641 containerd[2539]: time="2026-01-16T21:21:30.255537446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:21:30.507760 containerd[2539]: time="2026-01-16T21:21:30.507543567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:30.509987 containerd[2539]: time="2026-01-16T21:21:30.509906842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:21:30.509987 containerd[2539]: time="2026-01-16T21:21:30.509968461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:30.510205 kubelet[3994]: E0116 21:21:30.510180 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:30.511125 kubelet[3994]: E0116 21:21:30.510425 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:30.511125 kubelet[3994]: E0116 21:21:30.510541 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8pr8_calico-system(2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:30.512484 kubelet[3994]: E0116 21:21:30.512459 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef" Jan 16 21:21:31.253728 containerd[2539]: time="2026-01-16T21:21:31.253704323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:21:31.497329 containerd[2539]: time="2026-01-16T21:21:31.497223333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:31.500069 containerd[2539]: 
time="2026-01-16T21:21:31.499993760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:21:31.500258 containerd[2539]: time="2026-01-16T21:21:31.500048332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:31.500435 kubelet[3994]: E0116 21:21:31.500408 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:31.500522 kubelet[3994]: E0116 21:21:31.500501 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:31.500829 kubelet[3994]: E0116 21:21:31.500692 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dbrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6d7b95fc-88tnz_calico-system(010e89e0-6574-4783-aa50-97e803ab00dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:31.502145 kubelet[3994]: E0116 21:21:31.502034 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6d7b95fc-88tnz" podUID="010e89e0-6574-4783-aa50-97e803ab00dc" Jan 16 21:21:34.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.41:22-10.200.16.10:48426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 16 21:21:34.991114 systemd[1]: Started sshd@22-10.200.8.41:22-10.200.16.10:48426.service - OpenSSH per-connection server daemon (10.200.16.10:48426).
Jan 16 21:21:34.992593 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:21:34.992662 kernel: audit: type=1130 audit(1768598494.989:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.41:22-10.200.16.10:48426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:21:35.537000 audit[6694]: USER_ACCT pid=6694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.541095 sshd-session[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:21:35.544538 sshd[6694]: Accepted publickey for core from 10.200.16.10 port 48426 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns
Jan 16 21:21:35.548357 kernel: audit: type=1101 audit(1768598495.537:908): pid=6694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.548606 kernel: audit: type=1103 audit(1768598495.537:909): pid=6694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.537000 audit[6694]: CRED_ACQ pid=6694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.554553 systemd-logind[2504]: New session 26 of user core.
Jan 16 21:21:35.557859 kernel: audit: type=1006 audit(1768598495.537:910): pid=6694 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 16 21:21:35.537000 audit[6694]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc37a720 a2=3 a3=0 items=0 ppid=1 pid=6694 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:35.537000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:21:35.563683 kernel: audit: type=1300 audit(1768598495.537:910): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc37a720 a2=3 a3=0 items=0 ppid=1 pid=6694 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:35.563729 kernel: audit: type=1327 audit(1768598495.537:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:21:35.566004 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 16 21:21:35.567000 audit[6694]: USER_START pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.575866 kernel: audit: type=1105 audit(1768598495.567:911): pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.574000 audit[6698]: CRED_ACQ pid=6698 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.580848 kernel: audit: type=1103 audit(1768598495.574:912): pid=6698 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.896894 sshd[6698]: Connection closed by 10.200.16.10 port 48426
Jan 16 21:21:35.897477 sshd-session[6694]: pam_unix(sshd:session): session closed for user core
Jan 16 21:21:35.909289 kernel: audit: type=1106 audit(1768598495.896:913): pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.896000 audit[6694]: USER_END pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.909858 systemd[1]: sshd@22-10.200.8.41:22-10.200.16.10:48426.service: Deactivated successfully.
Jan 16 21:21:35.912690 systemd[1]: session-26.scope: Deactivated successfully.
Jan 16 21:21:35.914401 systemd-logind[2504]: Session 26 logged out. Waiting for processes to exit.
Jan 16 21:21:35.915667 systemd-logind[2504]: Removed session 26.
Jan 16 21:21:35.896000 audit[6694]: CRED_DISP pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:35.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.41:22-10.200.16.10:48426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:21:35.923908 kernel: audit: type=1104 audit(1768598495.896:914): pid=6694 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:36.260002 containerd[2539]: time="2026-01-16T21:21:36.259966283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 21:21:36.260279 kubelet[3994]: E0116 21:21:36.260024 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589885c4ff-m24sm" podUID="6049a9b7-c23b-4cd6-b5e0-33d5a278ce35"
Jan 16 21:21:36.518543 containerd[2539]: time="2026-01-16T21:21:36.518458781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:21:36.520948 containerd[2539]: time="2026-01-16T21:21:36.520906758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 21:21:36.521034 containerd[2539]: time="2026-01-16T21:21:36.520912169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:21:36.521142 kubelet[3994]: E0116 21:21:36.521085 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:21:36.521219 kubelet[3994]: E0116 21:21:36.521151 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:21:36.521487 kubelet[3994]: E0116 21:21:36.521256 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-cnh4g_calico-apiserver(be8f7779-a5b2-41ff-909f-1387d7e3242a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:21:36.522792 kubelet[3994]: E0116 21:21:36.522737 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-cnh4g" podUID="be8f7779-a5b2-41ff-909f-1387d7e3242a"
Jan 16 21:21:37.252935 kubelet[3994]: E0116 21:21:37.252742 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fcf4db84-9ph5q" podUID="2ea28179-07a2-4a0c-9cc7-b4eadca6090c"
Jan 16 21:21:41.021488 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:21:41.021586 kernel: audit: type=1130 audit(1768598501.010:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.41:22-10.200.16.10:47716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:21:41.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.41:22-10.200.16.10:47716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:21:41.011674 systemd[1]: Started sshd@23-10.200.8.41:22-10.200.16.10:47716.service - OpenSSH per-connection server daemon (10.200.16.10:47716).
Jan 16 21:21:41.581000 audit[6710]: USER_ACCT pid=6710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.584364 sshd-session[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:21:41.587477 sshd[6710]: Accepted publickey for core from 10.200.16.10 port 47716 ssh2: RSA SHA256:lsgdf9kTfCkRVY9y0PjBebpa1KHlJVorBVCG2klsKns
Jan 16 21:21:41.581000 audit[6710]: CRED_ACQ pid=6710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.592200 kernel: audit: type=1101 audit(1768598501.581:917): pid=6710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.592274 kernel: audit: type=1103 audit(1768598501.581:918): pid=6710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.596303 kernel: audit: type=1006 audit(1768598501.581:919): pid=6710 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 16 21:21:41.581000 audit[6710]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff226b2800 a2=3 a3=0 items=0 ppid=1 pid=6710 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:41.597813 systemd-logind[2504]: New session 27 of user core.
Jan 16 21:21:41.602585 kernel: audit: type=1300 audit(1768598501.581:919): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff226b2800 a2=3 a3=0 items=0 ppid=1 pid=6710 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:41.581000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:21:41.606651 kernel: audit: type=1327 audit(1768598501.581:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:21:41.608007 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 16 21:21:41.614000 audit[6710]: USER_START pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.623000 audit[6714]: CRED_ACQ pid=6714 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.629326 kernel: audit: type=1105 audit(1768598501.614:920): pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.629389 kernel: audit: type=1103 audit(1768598501.623:921): pid=6714 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.938189 sshd[6714]: Connection closed by 10.200.16.10 port 47716
Jan 16 21:21:41.938966 sshd-session[6710]: pam_unix(sshd:session): session closed for user core
Jan 16 21:21:41.939000 audit[6710]: USER_END pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.942249 systemd-logind[2504]: Session 27 logged out. Waiting for processes to exit.
Jan 16 21:21:41.944141 systemd[1]: sshd@23-10.200.8.41:22-10.200.16.10:47716.service: Deactivated successfully.
Jan 16 21:21:41.946658 systemd[1]: session-27.scope: Deactivated successfully.
Jan 16 21:21:41.949716 kernel: audit: type=1106 audit(1768598501.939:922): pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.949767 kernel: audit: type=1104 audit(1768598501.939:923): pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.939000 audit[6710]: CRED_DISP pid=6710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 16 21:21:41.948655 systemd-logind[2504]: Removed session 27.
Jan 16 21:21:41.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.41:22-10.200.16.10:47716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:21:42.255744 kubelet[3994]: E0116 21:21:42.255714 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8pr8" podUID="2fb4ecc8-b071-4dfc-9368-f4bc508fc3ef"
Jan 16 21:21:43.252694 containerd[2539]: time="2026-01-16T21:21:43.252603413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 21:21:43.500818 containerd[2539]: time="2026-01-16T21:21:43.500777851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:21:43.503411 containerd[2539]: time="2026-01-16T21:21:43.503342526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 21:21:43.503411 containerd[2539]: time="2026-01-16T21:21:43.503407597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:21:43.503852 kubelet[3994]: E0116 21:21:43.503748 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:21:43.503852 kubelet[3994]: E0116 21:21:43.503802 3994 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:21:43.504528 kubelet[3994]: E0116 21:21:43.504485 3994 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m69pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d4c957-8nr7d_calico-apiserver(8b6a9edf-78a2-4eb2-9228-633a08a758ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:21:43.506443 kubelet[3994]: E0116 21:21:43.506407 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d4c957-8nr7d" podUID="8b6a9edf-78a2-4eb2-9228-633a08a758ae"
Jan 16 21:21:44.255246 kubelet[3994]: E0116 21:21:44.255008 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pf8t" podUID="ad5a70af-916a-4e95-9866-1f1c8f4329d0"